Advent of Code 2021 Reflection

Well, it took me way longer than I wanted (I did the first 20 days during Advent, then took six months off), but I finally finished 2021’s Advent of Code. It definitely got sloppier as I went on, but hey, I cracked 300 stars! (I only have 48 stars left to earn at this point.)

As I did in previous years, I wrote up a little blurb on each problem and rated each one on difficulty and how much I liked it. You can find my solutions here. My rules are to use the standard library only, write everything else from scratch (no reuse from past years), and lint/mypy as much as I can.

Day 1

Type of Problem: Iteration

Difficulty: Easy

Rating: 4/5

Day 1 is typically an iteration problem, where you have to loop through some data and either aggregate it or compare it with its previous or next values. To solve this one efficiently, I zip the list with itself offset by one, so each measurement is paired with the one that follows it:

MVP was the ability to call the first half’s function from the second.

Measurements = list[int]
def get_number_of_increasing_measurements(measurements: Measurements) -> int:
    measurement_pairs = zip(measurements, measurements[1:])
    return len([p for p in measurement_pairs if p[1] > p[0]])

def get_number_of_increasing_sliding_windows(measurements: Measurements) -> int:
    measurement_sums = [sum(w) for w in
                           zip(measurements, measurements[1:], measurements[2:])]
    return get_number_of_increasing_measurements(measurement_sums)

Day 2

Type of Problem: Coordinate Systems

Difficulty: Easy

Rating: 3/5

The problems where you figure out where you end up after a series of directions are not typically my favorite. Similar to a Mars Rover problem, it’s fairly easy to work out, and this problem was no different. It involved tracking x and y separately, or tracking x, y, and an aim variable as you go through each move.
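
Here’s a minimal sketch of the part 2 version, assuming the moves have already been parsed into (direction, amount) tuples (that parsing helper is hypothetical, not from my solution):

def get_final_position(moves: list[tuple[str, int]]) -> int:
    x = depth = aim = 0
    for direction, amount in moves:
        if direction == 'forward':
            x += amount
            depth += aim * amount  # part 2: moving forward applies the aim
        elif direction == 'down':
            aim += amount
        else:  # 'up'
            aim -= amount
    return x * depth  # for part 1, aim itself acts as the depth, so use x * aim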

Day 3

Type of Problem: Binary Manipulation

Difficulty: Easy-Medium

Rating: 3/5

This involved figuring out which bits are the most common and doing some bitwise manipulation, so nothing too bad here. However, part 2 involves filtering bitstrings based on certain criteria, which could trip some people up as the code bloats on them. I thought this one was just okay: it was a long problem to read for what is not a complex problem.
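
For part 2’s filtering, a sketch like the following works, assuming the tie-breaking rules from the problem (keep '1' on ties for the most-common rating, '0' for the least-common):

def filter_rating(numbers: list[str], most_common: bool) -> int:
    candidates = list(numbers)
    position = 0
    while len(candidates) > 1:
        ones = sum(n[position] == '1' for n in candidates)
        majority = '1' if 2 * ones >= len(candidates) else '0'
        # the least-common rating keeps the opposite bit
        keep = majority if most_common else ('0' if majority == '1' else '1')
        candidates = [n for n in candidates if n[position] == keep]
        position += 1
    return int(candidates[0], base=2)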

MVP goes to my part 1 solution, which zipped the bits of each number together, joined the most common bit of each column into a new bitstring, and then used bitwise negation to figure out the power rate.

def get_power_rate(numbers: list[str]) -> int:
    bitgroups = zip(*numbers)
    gamma_bitstring = [get_most_common_bit(b) for b in bitgroups]
    gamma_rate = int(''.join(gamma_bitstring), base=2)
    # this is a bit of a cheat since we know it's 12 bits long
    return gamma_rate * (~gamma_rate & 0xFFF)

Day 4

Type of Problem: Game Simulation

Difficulty: Easy-Medium

Rating: 3/5

This one might be a bit more tedious, since you are encoding a game of bingo (several, actually), so I ranked it easy-medium: some newbies might find the complexity overwhelming. When doing things like this, I like lots of small classes with useful methods to manage the complexity (see below for my Board class).

I also created a game class that handled figuring out which board wins and which ones remain, which made the main while loops easy to write.

class Board:

    def __init__(self, rows: list[Row]):
        assert len(rows) == 5
        self.rows: list[Row] = rows

    def has_won(self) -> bool:
        return (any(row.is_all_marked() for row in self.rows) or
                any(self.is_column_marked(index)
                    for index in range(len(self.rows[0]))))

    def is_column_marked(self, index: int) -> bool:
        return all(row.is_marked(index) for row in self.rows)

    def get_score(self) -> int:
        return sum(row.get_sum() for row in self.rows)

    def apply_move(self, move: int):
        for row in self.rows:
            try:
                position = row.index(move)
                row.mark(position)
            except ValueError:
                pass

Day 5

Type of Problem: Grid Manipulation

Difficulty: Easy

Rating: 4/5

This one involved plotting lines on a grid and finding how many points are covered by more than one line. A simple dictionary from point to number of lines would suffice, so I of course used the collections.Counter class to do just that. Nothing too fancy here:

def get_number_of_overlapping_points_no_diagonal(
            lines: list[LineSegment]) -> int:

    non_diagonal = [line for line in lines
                    if line.is_vertical() or line.get_slope() == 0]
    return get_number_of_overlapping_points(non_diagonal)


def get_number_of_overlapping_points(lines: list[LineSegment]) -> int:
    all_points = itertools.chain.from_iterable(
        line.get_all_points() for line in lines)
    overlaps = Counter(all_points)
    return len([val for val, count in overlaps.items() if count >= 2])

Day 6

Type of Problem: Iteration / Data Structure

Difficulty: Easy-Medium

Rating: 5/5

These sorts of problems are satisfying for me. Part 1 can typically be solved by brute force, but if you try doing that for part 2, you’re going to have a bad time. In this case, we have exponential growth of lanternfish, where a new fish is spawned each time a timer runs out. You could keep a list of lanternfish, but you’ll quickly exhaust memory.

However, if you reduce the problem to just keeping count of how many lanternfish are in each stage, you only have to track nine counts (one per timer value, 0 through 8). collections.Counter to the rescue again:

def get_lanternfish_after(fish: list[int], days: int) -> int:
    counter = Counter(fish)
    for _ in range(days):
        counter = reproduce_fish(counter)
    return sum(counter.values())

def reproduce_fish(counter: Counter) -> Counter:
    new_counter: Counter = Counter()
    for fish, number in counter.items():
        if fish > 0:
            new_counter[fish-1] = number
    if 0 in counter:
        new_counter[6] += counter[0]
        new_counter[8] = counter[0]
    return new_counter

Day 7

Type of Problem: Iteration

Difficulty: Easy-Medium

Rating: 3/5

As I’m writing this, I’m surprised how many pure iteration problems are in the first week. (I define an iteration problem as one where you loop over your data and compute something from it.) This time, we’re calculating how much fuel is spent to align the crabs at a single position, and finding the position that minimizes that fuel.

Here’s my somewhat convoluted block for calculating fuel spent (whether linear or geometric consumption):

def calculate_fuel_spent(crabs: list[int], pos: int, geometric=False) -> int:
    func = ((lambda c: sum(range(1, abs(c - pos) + 1))) if geometric
            else (lambda c: abs(c - pos)))
    return sum(func(c) for c in crabs)

Day 8

Type of Problem: Deduction

Difficulty: Easy-Medium

Rating: 4/5

Deduction problems give you some obscured information and ask you to work out the correct mappings. Typically they are all the same idea: you create mappings for the things you know for sure (in this case, some of a 7-segment display’s digits use a unique number of wires), and then you use that information to rule out other possibilities, adding to your mapping as you go. Typically straightforward, but I appreciated this one for at least having an interesting premise, where we had to figure out the different numbers represented by the 7-segment displays.
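
Part 1 falls out of the first deduction step alone. Here’s a minimal sketch, assuming the four output digits of each display have been parsed into lists of wire strings (the names here are illustrative, not from my solution):

def count_unique_digits(displays: list[list[str]]) -> int:
    # 1, 4, 7, and 8 are the only digits with a unique number of
    # lit segments: 2, 4, 3, and 7 respectively
    unique_lengths = {2, 3, 4, 7}
    return sum(len(digit) in unique_lengths
               for display in displays for digit in display)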

Day 9

Type of Problem: Grid Manipulation

Difficulty: Medium

Rating: 5/5

This was my favorite of the early problems. I like problems that have a visual component to them (in this case, figuring out low points of a basin). Each year, I start from scratch (I don’t compete for speed, rather I just like the practice of building components up), so this was my first chance to write my grid class (which I do every year). This year, I added a get_neighbors function so that I could find the orthogonal adjacent points.

The second part was definitely tricky, as you had to find a way to merge basins. In this case, that meant creating a mapping from point to basin. I started in the top-left and iterated through all points; if a point had a basin above it or to its left, I merged it into that basin, and if it had neither, it started a new basin with a unique ID. Once I had mapped each point to an ID, I just had to use a collections.Counter (my favorite AoC collection type, it seems) to count which basins were the biggest.
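
Here’s a hedged sketch of that scan-and-merge idea using a small union-find; it assumes heights is a plain dict mapping (x, y) points to heights (my real solution uses my grid class instead):

from collections import Counter

def find(parent: dict[int, int], basin: int) -> int:
    # walk to the basin's representative, compressing the path as we go
    while parent[basin] != basin:
        parent[basin] = parent[parent[basin]]
        basin = parent[basin]
    return basin

def product_of_three_largest_basins(heights: dict[tuple[int, int], int]) -> int:
    parent: dict[int, int] = {}
    basin_of: dict[tuple[int, int], int] = {}
    next_id = 0
    # scan in row-major order so the up/left neighbors are already assigned
    for x, y in sorted(heights, key=lambda p: (p[1], p[0])):
        if heights[(x, y)] == 9:  # 9s belong to no basin
            continue
        neighbors = [basin_of[p] for p in ((x - 1, y), (x, y - 1))
                     if p in basin_of]
        if not neighbors:
            parent[next_id] = next_id  # nothing above or left: a new basin
            basin_of[(x, y)] = next_id
            next_id += 1
        else:
            roots = {find(parent, b) for b in neighbors}
            root = roots.pop()
            for other in roots:  # two different basins touch here: merge them
                parent[other] = root
            basin_of[(x, y)] = root
    sizes = Counter(find(parent, b) for b in basin_of.values())
    first, second, third = sorted(sizes.values(), reverse=True)[:3]
    return first * second * third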

Day 10

Type of Problem: Text Parsing

Difficulty: Easy-Medium

Rating: 3/5

The infamous parentheses balancing problem :). I’ve hit this during interviews for two out of the four jobs I’ve held, and it’s a good practice problem for dealing with stacks. This is pretty much identical to code I’ve written in interviews, but I put in some hooks to track corrupt and incomplete scores. This allowed me to use the core function for both parts (I try to reuse as much as I can between parts rather than rewriting).

(Note: when reviewing this, I realized I should check whether the letter is in my mapping’s keys or values instead of hardcoding the opening/closing braces.)

from typing import Callable

# maps each opening letter to the closing letter we expect to see
CLOSING_LETTER = {'(': ')', '[': ']', '{': '}', '<': '>'}

def parse(line: str,
          on_corrupt: Callable[[str], None] = lambda _: None,
          on_incomplete: Callable[[str], None] = lambda _: None
         ):
    stack: list[str] = []
    for letter in line:
        if letter in "{[(<":
            stack.append(letter)
        if letter in ">]})":
            closing_letter = CLOSING_LETTER[stack.pop(-1)]
            if letter != closing_letter:
                on_corrupt(letter)
                return
    if stack:
        on_incomplete(''.join(stack))

Day 11

Type of Problem: Grid Manipulation

Difficulty: Easy-Medium

Rating: 4/5

I was happy to have my grid class for determining how the octopuses (octopodes?) interact with their neighbors (I changed my get_neighbors function to optionally produce diagonal neighbors). In these sorts of problems, it doesn’t make sense to look at every piece of the grid; instead, check which ones are about to “flash”. Once an octopus flashes, you just have to look at its neighbors and track flashes from them (repeating as necessary):

def flash(octopodes: Grid[Octopus]) -> int:
    flashes = 0

    for point in octopodes:
        octopodes[point] += 1

    potentials = [point for point, octopus in octopodes.items()
                  if octopus == 10]
    while potentials:
        point = potentials.pop(0)
        for neighbor_point, neighbor in octopodes.get_neighbors(point, None,
                                                                 diagonal=True):
            if neighbor and neighbor != 10:
                # update the original grid point
                octopodes[neighbor_point] += 1
                # it's getting bumped to >9
                if neighbor == 9:
                    potentials.append(neighbor_point)
    # reset and count flashes
    for point, octopus in octopodes.items():
        if octopus == 10:
            flashes += 1
            octopodes[point] = 0
    return flashes

Day 12

Type of Problem: Graph Theory

Difficulty: Medium

Rating: 4/5

I like graph theory problems, I really do. This problem was to find the number of paths through a cave system, given that you can visit a small cave at most once. At first I thought about Hamiltonian or Eulerian paths, but you can do it with just a breadth-first search, tracking which nodes you’ve already seen. Part 2 allowed one extra visit to a single small cave, which just means tracking a boolean for whether we’ve used that revisit or not.
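
Here’s a recursive sketch of that counting (I used BFS, but the bookkeeping is identical); it assumes the cave connections have been parsed into a dict from cave name to neighbor names:

def count_paths(graph: dict[str, list[str]], node: str = 'start',
                seen: frozenset[str] = frozenset(),
                revisit_used: bool = True) -> int:
    if node == 'end':
        return 1
    total = 0
    for neighbor in graph[node]:
        if neighbor == 'start':
            continue  # never walk back into the start cave
        if neighbor.isupper() or neighbor not in seen:
            total += count_paths(graph, neighbor, seen | {neighbor},
                                 revisit_used)
        elif not revisit_used:
            # part 2: spend our single small-cave revisit
            total += count_paths(graph, neighbor, seen, True)
    return total

# part 1: count_paths(graph); part 2: count_paths(graph, revisit_used=False)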

Day 13

Type of Problem: Grid Manipulation

Difficulty: Medium

Rating: 5/5

I like problems that deal with some sort of visualization; this one involved folding a piece of transparent paper with marks and tracking what’s visible. At the end, you have to see what the message is. Straightforward to set up, but I had a few minor issues dealing with the folding logic.

Here’s how I’m folding a group of points. I especially like the list comprehensions and the splat operator (*) to expand lists.

def make_fold(points: list[Point], fold: Fold) -> list[Point]:
    if fold[0] == Direction.VERTICAL:
        normal_half = [p for p in points if p[1] < fold[1]]
        to_be_reversed = [p for p in points if p[1] > fold[1]]
        reverse = [(x, 2*fold[1] - y) for x, y in to_be_reversed]
    else:
        normal_half = [p for p in points if p[0] < fold[1]]
        to_be_reversed = [p for p in points if p[0] > fold[1]]
        reverse = [(2*fold[1] - x, y) for x, y in to_be_reversed]
    return list(set([*normal_half, *reverse]))

Day 14

Type of Problem: Iteration / Data structure

Difficulty: Medium

Rating: 3/5

I dreaded this one when I saw it was a polymer chain (I’m still stuck on a polymer chain problem from 2017), but it didn’t turn out nearly as bad. Anytime you see an answer that is in the billions (or more), you know you can’t just iterate over the problem; you have to find a more intelligent way of storing it. collections.Counter comes to the rescue again (my hero this year): I just keep track of how many of each letter pair I see, and update that counter each iteration. Thus, after 40 rounds, I can get an answer lightning quick, because the counter holds at most one count per possible pair.
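
Here’s a minimal sketch of one round of that pair bookkeeping, assuming rules maps a pair like 'AB' to the letter inserted between A and B:

from collections import Counter

def polymerize(pair_counts: Counter, rules: dict[str, str]) -> Counter:
    new_counts: Counter = Counter()
    for pair, count in pair_counts.items():
        inserted = rules[pair]
        # every 'AB' becomes an 'A{inserted}' pair and an '{inserted}B' pair
        new_counts[pair[0] + inserted] += count
        new_counts[inserted + pair[1]] += count
    return new_counts

To score at the end, count the first letter of every pair, then add one for the last letter of the original template (it never changes).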

Day 15

Type of Problem: Graph Theory

Difficulty: Medium

Rating: 5/5

I always like a good Dijkstra’s problem, and this was no exception. Finding your way through a grid isn’t too bad: you use a heap to track your progress instead of a queue (which would give you BFS). Your lowest-cost paths are at the top of the heap, and you go until you find the shortest path. I could use A*, but in most AoC problems, I haven’t found it worth the time to implement the heuristic part.

MVP is the heapq module in the standard library:

def get_lowest_risk(grid: Grid[int]):
    grid_side_length = int(sqrt(len(grid)))
    start = (0, 0)
    end = (grid_side_length-1, grid_side_length-1)
    smallest_cost = 10*len(grid)*len(grid)
    heap_queue: list[tuple[int, Point, set[Point]]] = [(0, start, set())]
    while heap_queue:
        cost, point, seen = heapq.heappop(heap_queue)
        if point in seen:
            continue
        if cost > smallest_cost:
            continue
        if point == end:
            smallest_cost = cost
            continue
        seen.add(point)
        neighbors = grid.get_neighbors(point, -1) # type: ignore
        for neighbor, value in neighbors:
            if value != -1:
                heapq.heappush(heap_queue, (cost + value, # type:ignore
                                            neighbor, seen))
    return smallest_cost

Day 16

Type of Problem: Parsing / Recursion

Difficulty: Medium

Rating: 3/5

Ah, this brings me back to parsing network protocol packets at ADTRAN. This was recursive parsing of a packet, where packets may contain other packets. The trick for these sorts of problems is to use an iterator rather than looping through by index; keeping a stateful iterator makes it easy to recurse into a sub-problem.
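
Here’s a hedged sketch of that trick for part 1 (summing version numbers); it assumes the transmission has already been expanded into a string of '0'/'1' characters (four bits per hex digit), and the peek-and-push-back handling of the length-based mode is my illustration here, not code from my solution:

import itertools
from typing import Iterator

def take(bits: Iterator[str], n: int) -> str:
    # consume exactly n bits from the shared, stateful iterator
    return ''.join(itertools.islice(bits, n))

def sum_versions(bits: Iterator[str]) -> int:
    total = int(take(bits, 3), base=2)            # packet version
    type_id = int(take(bits, 3), base=2)
    if type_id == 4:                              # literal: 5-bit groups
        while take(bits, 5)[0] == '1':            # a leading 1 means more groups
            pass
    elif take(bits, 1) == '1':                    # operator, by sub-packet count
        for _ in range(int(take(bits, 11), base=2)):
            total += sum_versions(bits)
    else:                                         # operator, by total bit length
        sub: Iterator[str] = iter(take(bits, int(take(bits, 15), base=2)))
        while (peeked := take(sub, 1)):
            sub = itertools.chain(peeked, sub)    # push the peeked bit back
            total += sum_versions(sub)
    return total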

Here’s how I broke down the value vs. operator packets, so that it was easy to recursively evaluate a packet:

import operator
from dataclasses import dataclass
from functools import reduce
from typing import Callable

FUNC_LOOKUP: dict[int, Callable] = {
    0: lambda *args: reduce(operator.add, args),
    1: lambda *args: reduce(operator.mul, args),
    2: lambda *args: min(args),
    3: lambda *args: max(args),
    5: lambda t1, t2: 1 if t1 > t2 else 0,
    6: lambda t1, t2: 1 if t1 < t2 else 0,
    7: lambda t1, t2: 1 if t1 == t2 else 0
}

@dataclass
class LiteralValuePacket(Packet):
    value: int

    def get_value(self) -> int:
        return self.value

@dataclass
class OperatorPacket(Packet):
    func: Callable

    def get_value(self) -> int:
        values = [p.get_value() for p in self.subpackets]
        return self.func(*values)

Day 17

Type of Problem: Simulation

Difficulty: Medium-Hard

Rating: 4/5

This was the first stumbling block of the year for me, as I didn’t finish it the first day. I had to do some math to figure out the right answers, and there were a few false starts as I tried to work it out. Eventually, I went with a simpler solution that worked well for me. I’m not going to paste the whole solution, but I like the interplay here between itertools and zip to figure out the x-step:

import itertools
from typing import Generator

def accumulator() -> Generator[tuple[int, int], None, None]:
    yield from zip(itertools.count(start=1),
                   itertools.accumulate(itertools.count(start=1)))


def get_first_x_value(target: Target) -> int:
    return next(i for i, total in accumulator() if total >= target[0])

Day 18

Type of Problem: Recursion

Difficulty: Hard

Rating: 3/5

This was another rough one for me. I could not get the right answer, no matter how hard I tried, and had quite a few crashes along the way. I knew I had to build a depth-aware tree, but I had a lot of problems crossing nodes of the tree. I’m not going to paste any code, because it was atrocious. My advice for problems like this is liberal use of asserts: I started asserting my preconditions and postconditions and found out where I had logic problems.

Day 19

Type of Problem: Deduction, Coordinate Systems

Difficulty: Hard

Rating: 4/5

Three hard problems in a row (at least for me), as you had to deduce three-dimensional coordinates based on distances between points, almost like an inverse triangulation. I had to optimize code all over the place to get it to actually finish. I wrote 160 lines of Python to solve this, and once again, it’s atrocious code. However, I still liked this one for its uniqueness.
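
I won’t subject anyone to those 160 lines, but one ingredient is worth sketching: enumerating the 24 axis-aligned orientations a scanner can have. This roll/turn sequence is a well-known trick, not code from my solution:

Point = tuple[int, int, int]

def roll(p: Point) -> Point:
    x, y, z = p
    return (x, z, -y)  # rotate about the x axis

def turn(p: Point) -> Point:
    x, y, z = p
    return (-y, x, z)  # rotate about the z axis

def orientations(p: Point):
    # yields all 24 axis-aligned orientations of p
    for _ in range(2):
        for _ in range(3):
            p = roll(p)
            yield p
            for _ in range(3):
                p = turn(p)
                yield p
        p = roll(turn(roll(p)))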

Day 20

Type of Problem: Grid Manipulation

Difficulty: Medium

Rating: 3/5

Grid manipulation comes back, this time to figure out how many pixels are lit. I actually read this one wrong and thought it was one of those cellular automaton problems where you have to wait until the grid converges to a stable state. However, once I looked a little closer, I ran it a fixed number of times and got the answer quite quickly.

Day 21

Type of Problem: Dynamic Programming

Difficulty: Medium-Hard

Rating: 4/5

This is the problem that took me six months to complete. The first half was simple (using iterators), but the second half threw me for a loop. I knew dynamic programming was the play, as I had optimal substructure and a recurrence relation with overlapping subproblems. The problem was, I picked the wrong recurrence relation.

I thought that the number of universes won was as follows:

Wins(Turn, Space, Score) = Sum(Possibilities * Wins(Turn - 1, Space - Roll, Score - Space) for Roll, Possibilities in DicePossibilities)

where DicePossibilities gives the number of ways you can roll 3, 4, 5, 6, 7, 8, or 9 with three dice (for instance, you can roll a 4 in three different ways: 1,1,2; 1,2,1; 2,1,1).

However, this missed some key points. I put in my answer and it was too high, which means I was double counting (or overcounting) something, a typical problem in counting with dynamic programming. I kept looking for errors in my algorithm, never thinking that my recurrence relation itself was wrong. I was treating each player as independent, when in fact you shouldn’t count any games in which the other player has already won (nor ones where you have already won).

Thus, the recurrence relation is closer to:

Wins(P1Score, P1Space, P2Score, P2Space, IsPlayer1) =
    Sum(Possibilities * (Wins(P1Score - P1Space, P1Space - Roll, P2Score, P2Space, False) if IsPlayer1
                         else Wins(P1Score, P1Space, P2Score - P2Space, P2Space - Roll, True))
        for Roll, Possibilities in DicePossibilities)

The max score you can get is 30 (hitting a 10 after having 20 points), so it’s easy enough to get what you want.
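
As an aside, the DicePossibilities lookup is easy to build with itertools and (who else?) collections.Counter:

import itertools
from collections import Counter

# ways to roll each total with three Dirac dice:
# {3: 1, 4: 3, 5: 6, 6: 7, 7: 6, 8: 3, 9: 1}
DICE_POSSIBILITIES = Counter(
    sum(rolls) for rolls in itertools.product((1, 2, 3), repeat=3))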

Day 22

Type of Problem: Iteration / Data Structure

Difficulty: Hard

Rating: 5/5

This string of five or six problems was quite hard. I had to visualize this one with some cube magnets to figure out what I needed. The trick is, given a new cuboid instruction, you have to figure out if it intersects with any cuboid you’ve already processed. I processed my cuboids backwards, because I knew that if I had already handled a cuboid, it would override any earlier instruction. To handle the intersections, I would decompose the new cuboid instruction into six new cuboids, ignoring any parts that were intersecting. It led to this ugly piece of code:

def get_external_cubes(self, cube: 'Cuboid') -> list['Cuboid']:

    # Assuming our dimensions are Sx, Sy, Sz
    # and our sides are (SXmin, SXmax), (SYmin, SYmax), (SZmin, SZmax).
    # Given the cuboid with dimensions Cx, Cy, Cz
    # and sides (CXmin, CXmax), (CYmin, CYmax), (CZmin, CZmax),
    # with an intersect area of
    # Xmin, Xmax, Ymin, Ymax, Zmin, Zmax:

    # if it intersects the cube, there are 6 cuboids that need to
    # be prepended to the list to check,
    # with the following dimensions:
    # C1 = (Xmin, Xmax), (CYmax, Ymax), (Zmin, Zmax)
    #    located above the intersection with same x, z
    # C2 = (Xmin, Xmax), (Ymin, CYmin), (Zmin, Zmax)
    #    located beneath the intersection with same x, z
    # C3 = (CXmin, Xmin), (CYmin, CYmax), (Zmin, Zmax)
    #    located to the left of the intersection, with full height,
    #    same z
    # C4 = (Xmax, CXmax), (CYmin, CYmax), (Zmin, Zmax)
    #    located to the right of the intersection, with full height,
    #    same z
    # C5 = (CXmin, CXmax), (CYmin, CYmax), (Zmin, CZmin)
    #    located in front of the intersection, full height/width
    # C6 = (CXmin, CXmax), (CYmin, CYmax), (CZmax, Zmax)
    #    located behind the intersection, full height/width

    # intersect dimensions
    if self.x_min <= cube.x_min <= cube.x_max <= self.x_max:
        xrange = (cube.x_min, cube.x_max)
    elif cube.x_min <= self.x_min <= self.x_max <= cube.x_max:
        xrange = (self.x_min, self.x_max)
    elif self.x_min <= cube.x_min < self.x_max < cube.x_max:
        xrange = (cube.x_min, self.x_max)
    elif cube.x_min < self.x_min <= cube.x_max <= self.x_max:
        xrange = (self.x_min, cube.x_max)
    else:
        assert False, "Missing x condition"

    if self.z_min <= cube.z_min <= cube.z_max <= self.z_max:
        zrange = (cube.z_min, cube.z_max)
    elif cube.z_min <= self.z_min <= self.z_max <= cube.z_max:
        zrange = (self.z_min, self.z_max)
    elif self.z_min <= cube.z_min < self.z_max < cube.z_max:
        zrange = (cube.z_min, self.z_max)
    elif cube.z_min < self.z_min <= cube.z_max <= self.z_max:
        zrange = (self.z_min, cube.z_max)
    else:
        assert False, "Missing z condition"

    new_cubes = [
        Cuboid(*xrange, self.y_max + 1, cube.y_max, *zrange),
        Cuboid(*xrange, cube.y_min, self.y_min - 1, *zrange),
        Cuboid(cube.x_min, self.x_min - 1, cube.y_min, cube.y_max,
               *zrange),
        Cuboid(self.x_max + 1, cube.x_max, cube.y_min, cube.y_max,
               *zrange),
        Cuboid(cube.x_min, cube.x_max, cube.y_min, cube.y_max,
               cube.z_min, self.z_min - 1),
        Cuboid(cube.x_min, cube.x_max, cube.y_min, cube.y_max,
               self.z_max + 1, cube.z_max)
    ]
    valid_cubes = [c for c in new_cubes if c]
    assert sum(c.get_area() for c in new_cubes if c) < cube.get_area()
    return valid_cubes

Day 23

Type of Problem: Constraint Satisfaction Problem / Graph Theory

Difficulty: Medium

Rating: 5/5

This was another problem I thought was cool. At its heart, it’s a game where you enumerate all the moves that can be made (keeping the constraints in mind), see what’s possible, and then recurse into a subtree. Again, this is Dijkstra’s algorithm through the game’s state space, and heapq makes another appearance.

I really liked the use of generators for my branch-and-prune:

# get the solution space for possible moves
def get_possible_moves(spaces: Spaces) -> Generator[tuple[int, Spaces],
                                                    None, None]:
    for index, space in enumerate(spaces):
        # get hallway to room moves
        if isinstance(space, str):
            # start walking back and forth until you find an empty room
            yield from move_to_room(spaces, index, 0, -1)
            yield from move_to_room(spaces, index, len(spaces) - 1, 1)
        # get room to hallway moves
        if isinstance(space, Room):
            yield from move_to_hallway(spaces, index, 0, -1)
            yield from move_to_hallway(spaces, index, len(spaces) - 1, 1)

Day 24

Type of Problem: Computer Simulation / Reverse Engineering / Recursion

Difficulty: Hard

Rating: 4/5

This one was a puzzle. Given some assembly language, I first thought that I needed to simulate a computer. Then I realized I needed to check hundreds of billions of numbers, and said no way. I started to work out the assembly language by hand, and realized that for each digit Di, the same code was run, with variables Xi, Yi, and Zi being the only things that changed.

I worked out that the equation looked like this:

z = z // Zi
if z%26 + Xi == Di:
    z = z*26 + (Di + Yi)

From this, I could set z to zero, solve for the previous z, and keep working my way back to find all solutions that would eventually produce my result. The code took a little longer than I wanted, but it eventually found a solution, and I eagerly looked ahead to the next problem, given that this one took me about a week or two.

Day 25

Type of Problem: Grid manipulation

Difficulty: Medium

Rating: 3/5

Last puzzle of the year, woo! This was a simple grid with rule-based movement that you just loop through until the state is the same as the last turn. It took me a little longer than I wanted, because I was not operating on a copy for the ‘v’ sea cucumbers. Remember: if everything moves in unison, your code should operate on a copy, so that as you mutate your data, you aren’t influencing the current turn.
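
Here’s a minimal sketch of what “operate on a copy” means for one east-moving step, assuming the grid is a list of character lists using '>', 'v', and '.':

def step_east(grid: list[list[str]]) -> list[list[str]]:
    height, width = len(grid), len(grid[0])
    new_grid = [row[:] for row in grid]  # every move reads the old state
    for y in range(height):
        for x in range(width):
            right = (x + 1) % width  # the map wraps around
            if grid[y][x] == '>' and grid[y][right] == '.':
                new_grid[y][x] = '.'
                new_grid[y][right] = '>'
    return new_grid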

Changing Jobs When You’re a Senior Engineer

It’s been about half a year since I wrote my last post, and I wanted to reflect on some of the goals I had set for myself.  In a nutshell, I wanted to 1) grow the HSV tech community, 2) learn some cloud technologies, 3) build something, and 4) read more theoretical books.

So, halfway through the year, I feel like the HSV tech community has been growing, and HSV.py is still going strong.  I still have a surprise coming, so stay tuned.  But what derailed most of my other goals is that in April, I started a new job: Software Engineer at Canonical!

I’m working on the CPC team, which focuses on public and private cloud deployments.  I mostly do Python library work and Jenkins pipelines to transform Ubuntu images into their suitable cloud images.  It’s been great, because 1) I get to work in Python and 2) I’m learning about a lot of cloud technologies along the way.

But this is only my third-ever job.  The first time I moved jobs, I had 3 years of experience; now I have 12.  I feel like there is a lot of advice out there for when you are new and swapping jobs, but not so much for when you’re a senior engineer.  Here are some tips I’ve discovered in my three months of working there.

Advent of Code 2018 Week 1: Recap

So I’ve decided to do Advent of Code this year again (no surprise there), but this time, I’m encouraging everyone in HSV.py to join me as well.

I’ve completed 8 challenges and thought it was time for a recap.  I plan on breaking down the solutions day by day, then ending with some lessons learned that might help others.  I compete each night with ugly, hacked-together code, then work on refactoring it the next day.  What I share is the refactored version (don’t think I spit something like this out in just an hour).  You can find all my code on my GitHub.

So let’s get started.

Starting Tech Talks From Ground Zero

I’m at DevSpace 2018 right now, and just participated in an Open Space about Lunch and Learns.  As I have previously written, I am the curator for Tech Talks at my workplace.  Instead of talking about how our tech talks work today, though, I want to give my ideas on how to start up a tech talk or lunch-and-learn culture from absolutely nothing.

Remember, you don’t have to have something fully launched day one for it to be valuable.  Be Agile.  Create your MVP for Tech Talks, and iterate on what works; throw away what doesn’t.

The State of C++17

Whelp, I haven’t written in a while (I knew this would happen), but it’s okay.  Nothing like a dev conference to get me writing again.

I had been selected for two talks for DevSpace2017, which is going on as I write this.  My two talks are “C++17: Not Your Father’s C++” and “Building a Development Community In Your Workplace”

I delivered the C++ talk this morning, and I thought it went quite well.  What I want to address in this blog post is the state of C++, with respect to C++17.  C++11 had a huge impact on the C++ community.  We got lambdas, type inference, variadic templates, smart pointers, type traits, constexpr, and a bunch of other game-changers.

Around the community, I’ve heard some conversations about whether C++17 is a bust.  Is it a joke?  I think the reason for this is that people measure C++17 against C++11.  But this isn’t necessarily fair.  C++11 had 7 or 8 years to get moving and to introduce its ideas.  It came all at once, and compilers had to play catch-up to deliver all these cool new features.

But the C++ committee realized this was subpar.  It was too much all at once.  So now they are doing something a bit different: smaller releases, more frequently.  (Of course I like this idea; it’s much closer to a continuous integration mindset – and yes, I know that 3 years between releases is not continuous, but for compiler releases, I’ll take it.)

So instead, I measure C++17 against C++14.  C++14 gave me a few things like make_unique and auto parameters in lambdas, but it wasn’t a game-changer for me.  Those were quality-of-life improvements.  C++17, on the other hand, has variants (type-safe unions), optional, parallel algorithms, std::byte, string_view, fold expressions, plus a whole bunch of quality-of-life improvements.

People also complain about features that aren’t in the standard (concepts, ranges, modules, reflection, etc.).  I agree, it stinks that these didn’t make it.  But let’s not judge C++17 by what didn’t make it in; instead, let’s look at what did.  When I look at it this way, I consider C++17 a great improvement to the language.

Music City Code 2017

I just got back from Music City Code 2017, and it was a blast.  It was my first time going, and I had no idea what to expect.  The talks were much better than I thought they would be, the venue was nice (Vanderbilt has a wonderful campus), and I got to meet some great people.

Let’s break down the days.

Day 1

Keynote: Change Your World with this One Simple Trick by Jeremy Clark (@jeremybytes)

This keynote was great.  The slides were all hand-drawn, reminiscent of David Neal’s talks.  It was about how meeting someone new can make you a better developer, and that’s what the conference was all about.

Among some of the good tips:

How to start a conversation

  1. Hi, I’m <so and so>.
  2. What do you do?
  3. What technologies do you use?

And he explained how developers love to talk, but hate to start conversations.  You could tell everyone was trying this out throughout the conference, and I got to meet some interesting people.

F# Type Providers by Chris Gardner (@freestylecoder)

I had met Chris before, since he runs the DevSpace conference here in town, and I was going to this conference to learn some new things, so I thought F# was a decent one to go learn about.  My experience with it was one day during Advent of Code, so I wanted to see some of the cool things it had to offer.

F# Type Providers are an awesome way to check the types of a data source at compile time.  The more errors you can push earlier, the better: why wait until runtime, when building the code can tell you if your data source has missing or wrong types?  I’ll probably never use this, but it’s a cool concept.

Data Visualization with NVD3.js and D3.js by Dustin Ewers (@dustinewers)

Dustin was a fun presenter.  He started off with a great example of what makes good data visualization (the ability to interact and tell a story).  He had a great sense of humor and presented in an interesting way.  We didn’t get as deep into D3.js as I wanted (we use it someplace at work, and I’d like to understand how it works under the hood), but he introduced me to NVD3.js for creating really quick line graphs.

I was talking to some of the attendees about how to set up a Python web server to host some of this sort of data.  I told them I’d get them something over lunch.  During lunch, I wrote up a quick script (you can find it here).  It just used Bottle to spin up a web server, then loaded the data into a line graph in NVD3.  It was a lot simpler than I expected, and the other guys appreciated it a lot.

Lunchtime Functional Programming Panel

This was just a four-person panel talking about how to get into functional programming and its benefits.  A lot of it was stuff I had already drunk the Kool-Aid for, so nothing was too new.

A Lap Around Xamarin by Douglas Starnes (@poweredbyaltnet)

I don’t do a lot of .NET or mobile development, and this was probably the wrong talk for me.  I didn’t get a lot out of it, since the text was too small for me to read and follow what was going on.  There was nothing wrong with the presenter or material; it just ended up not being for me.

IoT with the ESP8266 and NodeMCU Firmware by Jason Follas (@jasonfollas)

This was an interesting talk about the options you have when building cheap IoT devices.  My co-workers talk about the ESP8266 a lot, and I now know a lot more about it.  Most of this talk was explaining Lua, which I already knew, but seeing the workflow for the ESP8266 was pretty cool.

Day 2

ElasticSearch in an Hour by John Berryman (@jnbrymn)

I really liked this talk.  I knew nothing about ElasticSearch, even though we use it at work on some other projects.  I thought this was a great introduction to how search engines work, and ElasticSearch in particular.  We talked about tokenization, stemming, relevance, and indexing.  There were also some great examples contrasting a relational database with ElasticSearch.

Lightning Talks

So they had lightning talks in the same time frame as the other talks.  It’s a shame, because this meant there were 8 other talks going on at the same time.  As a result, only one person had registered to give a lightning talk in the two sessions I went to.  There were only about 12 people in the first one, and 4 in the next one.  So, we just did impromptu lightning talks.  I gave two: one recapping what I had done the day before with Bottle/NVD3, and another that I gave at ADTRAN about Terrible, Terrible Things You Can Do in C++.  I got to learn a whole lot of other stuff, such as Bayesian statistics, F# Type Providers (again), accessibility, and interview techniques.

Lunchtime Software Quality Panel

It was refreshing to hear people in the industry talk candidly about how they expect people to write tests up front, and how to change culture to address software quality.  There was a lot of great discussion, and I agreed with most of it.

R: It’s Not Just For Pirates Anymore by Dustin Ewers (@dustinewers)

I liked Dustin’s talk the day before, so I decided to listen to this one about R.  I don’t have any plans to write any R, but it was good to know what the language is capable of.  It reminded me of Pandas in Python.  Dustin had the same good humor and relevant examples.  It was also cool to see how easy it is to write a clustering algorithm or decision tree; machine learning seems to be a first-class citizen in this language.

Career Growth Questions You’re Afraid To Ask by Cassandra Faris (@CassandraFaris)

I was originally going to go to a JS talk, but I decided to do a soft-skills talk instead.  This was an interesting take, from a recruiter/HR perspective, on how to look for a new job.  It was nice to see what they were looking for in candidates, what the red flags were, and to get advice on how to sell yourself.

Wrap-up

I will definitely go back to Music City Code next year, as it was much better than I expected.  Talks were good, food was good, people were friendly; I couldn’t ask for much more.  Plus, it’s an hour and a half away from me, which makes it much easier to go check it out each year.

Alexa-Controlled Remote Part II

So I got my new LEDs in, and lo and behold, it works!

To make sure that they were blinking, I wrote a quick Python script that toggles multiple pins (right now it’s hardcoded to just one pin, but I could change the variables to toggle others).  This was very nice, as I could have it turn on every 15 seconds or so and watch it on my phone camera.


import time

import RPi.GPIO as GPIO

gpio_start = 18
gpio_end = 19

GPIO.setmode(GPIO.BCM)

# configure each pin in the range as an output
for pin in range(gpio_start, gpio_end):
    GPIO.setup(pin, GPIO.OUT)

while True:
    for pin in range(gpio_start, gpio_end):
        GPIO.output(pin, True)
        print('on', pin)
    time.sleep(15)

    for pin in range(gpio_start, gpio_end):
        GPIO.output(pin, False)
        print('off', pin)
    time.sleep(15)

Toggling pins through Python was nice and easy.

DIY Alexa-Controlled Remote Control

So, I think I’m going to embark on my first official hardware project.  I want to make something I can voice-control that allows me to do common operations on my projector and sound system (like power on, switch to Blu-ray, etc.).  I don’t have much of a HW background (I’ve made it maybe halfway through Make: Electronics – excellent book, by the way), so I figured this would be interesting.  I got a new Raspberry Pi 3 for Christmas and have been looking for a good project for it.

So, I knew I needed an IR transmitter to simulate a remote.  I figured I’d get an IR receiver as well, to record signals and learn more.  I browsed the web and found two that I liked (IR Transmitter and IR Receiver) from the same company.  They looked no-fuss as well: I just had to connect ground, voltage, and two GPIO pins.  Didn’t sound too bad.

So let’s get started!

PyTennessee 2017 Day 2

Some of our party were feeling ill on the second day of PyTN, so we made it as far as my talk and then had to boogey out of there.

Let’s see what I went to though:

Keynote: A deeper look at the Operating System by Sophie Rapaport (@sfrapoport)

Sophie was a good speaker, keeping us engaged for most of the talk.  While I knew most of the material already, from operating systems courses and from working on embedded systems, it was still nice to see Sophie explain that, even just with Python, you can start learning about the under-the-hood parts of the language without diving into C.  I think it was a nice talk for all the people who had no OS experience.  She also related why we should care, and I’ve always liked it when a speaker finds a way to connect the why with the how.

Scraping a great data set to predict Oscars by Deborah Hanus (@deborahhanus)

This was a nice, quick talk about the methodology Deborah used to complete one of her course projects.  Her goal was to predict box office hits and Oscars using data science.  She walked through how to scrape data from multiple sources, how to analyze and clean the data, and then how to present it.  I didn’t find much of this revolutionary, but it did offer a glimpse into how easy it can be to grab a data set and go to town on it.

Lunch Lightning Talks

This was another set of good lightning talks.  We heard about writing great tutorials, Legacy Python vs. Python 3, and a few others that I can’t remember.

What Time is it Anyway by Greg Back (@gtback)

This was another quick talk discussing the options you have in Python for getting timing right.  We explored what the standard library gives us (which is great if you need to be timezone-naive), and what some other libraries offer (including up-to-date timezone information).

BDD To The Bone by Pat Viafore (@PatViaforever)

So this was my talk!  It went quite well in my opinion, but I’m biased.  I didn’t get the audience I was hoping for (~25 people), but I saw a lot of vigorous nodding, so I got some things right.  I had some immediate feedback and questions, which means people were interested.  I talked to some QA engineers from Emma and some local Huntsville people, and had some good discussions on how BDD can help people.

I had some nerves in the beginning, but I think the talk went smoothly.  See for yourself at https://youtu.be/H2FuJYlbzDg

We were so tired after all this, though, that we skipped the last two talks and headed home.  It was another great conference.  I wish I had taken some more time to meet more people, but I’ll have another chance when we go next year.

PyTennessee 2017 Day 1

Well, I’ve made it to another conference (they are even letting me present at this one).  I was in Nashville for PyTennessee.  I love Python, and this is a great conference and community to be a part of.

Keynote: The Importance of Community and Networking, by Sarah Guido (@sarah_guido)

I watched Sarah the first time I went to PyTennessee two years ago, and she gave one of my favorite talks, about data science.  This time, she talked a bit about her personal journey, from classically trained trumpet player in college to senior data scientist at Mashable.  She gave some great tips on how to give back to the community (starting meetups, going to meetups, Slack channels, open source contributions) and on how to avoid burnout.

What’s in your pip toolbox? by Jon Banafato (@jonafato)

So I was trying to figure out a lightning talk, so I didn’t pay too close attention to this one, but what I did get out of it was something I’m going to go use at work.  I knew most of the pip requirements.txt information, but I learned about pip-compile and pipdeptree.  pip-compile was nice, as it helps you build a requirements.txt file based on the libraries you import, not giving you anything extraneous.  pipdeptree is a great tool to show where your pip dependencies are coming from.

Lunch Lightning Talks

There were a series of 5-minute lightning talks.  I decided thirty minutes before them that I’d write a unit test talk.  However, I fought with Linux windowing twice and never got it going :(.

Other talks were on things like pipenv, XPath, and the Rust community.

A brief introduction to concurrency and coroutines by Eric Appelt (@appeltel)

This was probably my favorite talk.  Eric did a good job with easy-to-understand examples, and walked through iteration, generators, and then the new async/await syntax in Python 3.5.  I learned a lot through this, but I don’t know if I’ll get to the asyncio stuff at ADTRAN.  It is 3.5-only (we’re using legacy Python), and it has a bit of a viral effect.

Let’s Build A Hash Table, In C by Jacques Woodcock (@jacqueswoodcock)

This one was alright.  I knew C pretty well, and I’ve written hash tables before, so I didn’t learn a whole lot new.  The slides were pretty good, though.

Big data Analysis In Python with Apache Spark, Pandas, and Matplotlib by Jared M. Smith (@jaredthecoder)

This was another great talk.  I heard Jared on Software Engineering Daily a few weeks ago and liked that episode.  I saw his picture in the PyTN bios, recognized it, and decided to go to his talk.  It was a bonus that it was about data science.  I’ve been to a few meetups where Spark was discussed, but Jared gave a good example of how to actually use it.  The Pandas and Matplotlib part felt a little tacked on, but it was good to mention them (I probably feel this way because I already knew what he talked about).  I wish we could have seen some more examples.

Keynote: Humaning is Hard by Courey Eliot (@dev_branch)

This was a short but very honest talk about privilege, disabilities, mentoring, community, and helping people in need.  She held everyone’s attention, and it was refreshing to see such a candid talk on such a tough subject.