CS103 Archive

This page contains archived versions of the Stanford CS103 (Mathematical Foundations of Computing) webpage in the quarters I've taught it. All internal links should be valid, though external links may no longer be functional.

2021-2022 Academic Year
Fall 2021

There's a lot to talk about in this offering of CS103! It represents a mix of topic changes, pedagogical updates, and technical upgrades.

One of the benefits of running tutorial sessions last year and doing the "revise-and-resubmit" on the exams was getting a much deeper sense of what students were getting tripped up on when writing formal proofs, especially toward the start of the quarter. In doing so, I found there were two major issues that needed to be addressed. First, the lectures, as they'd historically been structured, included little guidance on a key aspect of proofwriting - distinguishing what it means to assume a universally-quantified or existentially-quantified statement from what it means to prove one. This was a blind spot in our coverage that meant students needed to infer these rules from examples - or to discover them by running up against them on the exams. The second was that learning how to write proofs for the first time while also learning about manipulating definitions in set theory was hard - in large part because proofs on sets involve assuming universally-quantified statements. I decided to take aim at these issues to see if we could reshape CS103 to give better guidance in these domains.

The larger of these two issues was getting students to differentiate between assuming and proving statements. When we had tutorial sessions running in parallel with the main lectures, there was extra time - should we choose to use it - to cover this. However, with the return to in-person instruction, I simply didn't have the bandwidth to teach lectures in person while also running enough tutorial sessions to support all the students. That meant that, if I wanted to ensure everyone got the message about this distinction, it would need to go into the lectures. But as the lectures stood, there simply wasn't any time to do something like this - there's so much material in the course that there's no free space to discuss these topics!

Based on that, I made the difficult decision to cut binary relations from CS103. Binary relations have been a mainstay of CS103 since I first started teaching the class in Fall 2011. They're incredibly useful for getting students to reason about first-order definitions, to investigate consequences of formal definitions, etc. However, it's primarily these latter attributes - the skills that binary relations exercise - rather than binary relations themselves that made them useful to CS103. By cutting them, I could spend more time focusing on the other discrete structures we cover (functions and graphs) and fill in the needed exploration of how to write formal mathematical proofs.

Dropping binary relations freed up two lectures. I allocated one nominally to functions, and one nominally to graphs. This allowed me to change how I taught those topics. Rather than trying to cram all the definitions plus worked examples into two lectures, I was able to spread the topics around in a way that let me introduce definitions, do examples, and go through worked proofs in each case. For functions, this meant that the first lecture purely introduced what functions are, explored classes of functions (involutions, injections, and surjections), and gave examples of each. In doing so, we did some simpler proofs that certain functions did / did not have those properties, showing off how the first-order logic definitions gave rise to the proof shape. In the second lecture, I introduced what it meant to assume different types of statements (particularly universals) and how, from there, to prove statements like "all involutions are injective." (Some of these materials were derived from the tutorial sessions from last year, which in turn were largely derived from Amy Liu's CS103A offering.) From there, I proceeded into the lecture on cardinality more or less as usual.
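As an illustration of the flavor of proof this builds toward (my reconstruction, not the exact one from lecture), here's how the first-order definitions shape the argument that every involution is injective:

```latex
% Definitions, over a function f : A -> A:
%   f is an involution:  \forall x \in A.\; f(f(x)) = x
%   f is injective:      \forall x \in A.\; \forall y \in A.\; (f(x) = f(y) \to x = y)
\textbf{Theorem.} Every involution $f : A \to A$ is injective.

\textbf{Proof.} Let $f : A \to A$ be an arbitrary involution, so we may
\emph{assume} the universal statement $\forall x \in A.\; f(f(x)) = x$.
To \emph{prove} the universal statement in the definition of injectivity,
pick arbitrary $x, y \in A$ where $f(x) = f(y)$. Applying $f$ to both sides
gives $f(f(x)) = f(f(y))$. Instantiating our assumption at $x$ and at $y$
shows that the left-hand side is $x$ and the right-hand side is $y$, so
$x = y$, as required. $\blacksquare$
```

Notice how the assumed universal gets instantiated at specific values, while the proved universal begins by picking arbitrary ones - exactly the distinction the lecture is after.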

In the land of graphs, adding in a new lecture allowed me to spend an initial lecture on basic definitions related to adjacency and to write proofs about definitions like independent sets and vertex covers. This allowed me to showcase the two-column proofwriting strategy and to continue to reinforce the distinction between assuming and proving things. The next lecture then discussed connectivity. One minor change I made there that I think is for the better is switching from "path" and "simple path" to "walk" and "path," which reduces the ambiguity of the terms. The second lecture on graphs also included a lovely math puzzle about teleporting a train that showcased why reasoning about paths is such a nice thing to do. And from there, the third lecture focused just on the pigeonhole principle, using more or less the same version from last time.
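For reference, the definitions involved look roughly like this (my phrasing, not necessarily the lecture's exact wording):

```latex
% In a graph G = (V, E):
%   S \subseteq V is an independent set:
\forall u \in S.\; \forall v \in S.\; \{u, v\} \notin E
%   C \subseteq V is a vertex cover:
\forall \{u, v\} \in E.\; (u \in C \lor v \in C)
% A classic proof exercise linking the two:
S \text{ is an independent set of } G
  \;\iff\; V \setminus S \text{ is a vertex cover of } G.
```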

The other issue I wanted to tackle was proofs on sets. After giving it some thought, I decided to cut this from CS103 this quarter. My reasoning was that they appeared so early on that it was just too much for students to try to learn all at once while taking their first steps with proofs. In retrospect, I think that this was a bit too aggressive. I think it's possible to integrate set theory proofs back into CS103 by spreading them out over a long series of problem sets. For example, we could introduce the basics in PS2, then continuously step up the difficulty of these problems over PS3, PS4, etc. However, I did notice that cutting these sorts of proofs from the problem set markedly decompressed things and made it possible to spend more time drilling fundamentals on the early problem sets.

These preceding changes - dropping binary relations and set-based proofs - necessitated major updates to the problem sets. In particular, Problem Set 1 needed to be updated to fill the gap left by set-theory proofs, Problem Set 3 needed to be rewritten almost from scratch because it almost exclusively covered binary relations, and PS6, PS7, and PS8 needed to be tweaked because dropping sets and relations removed some of the long-running threads there. I revised PS1 by integrating many of Cynthia's ideas from last year, plus resurrecting a classic CS103 problem about tiling with triominoes. On PS3, I aimed to fill the gap left by binary relations by adding a bunch of problems on functions involving reading and manipulating functions with properties specified in first-order logic. In PS6, I added a new problem on induction on strings that hinted at structural induction. And on PS7, I adjusted what we'd previously been asking to build on this new string induction paradigm.

In retrospect, I think I got many of these changes right. The revised PS1 did a good job communicating how to prove basic classes of statements in a setting where the difficulty purely lay with formalizing the reasoning. The revised PS3 enabled me to introduce concepts like "without loss of generality" that came up a ton more later in the quarter and did a great job reinforcing the distinction between assuming and proving universally-quantified statements. And the new PS6 and PS7 questions kept students practicing induction.

However, there are two ways in which I think these problem sets will need further revision. The first is one that I didn't anticipate when cutting binary relations and sets: doing so left the remaining problems as essentially forms of equational reasoning. This meant that students were largely writing proofs that involved manipulating equalities. There's nothing wrong with this per se, but as soon as we jumped from there to writing proofs about graphs, we saw students struggling much more than usual with figuring out what counted as rigorous reasoning and what was too high-level. This is, in my opinion, pretty trivial to fix, especially if we add set theory proofs back in earlier. The other issue was also one I didn't anticipate: starting with PS6, the proofs on the problem sets became very "formulaic," in the sense that they could largely be pattern-matched against earlier templates (string induction, Myhill-Nerode, etc.). That was an honest oversight on my part, and it meant that the amount of general proofwriting practice on the later problem sets was lower than anticipated. In retrospect, this should have been easy to foresee. It had taken a lot of work to integrate proofwriting into the tail end of CS103 by adding in a series of problems building up to the converse of Myhill-Nerode; removing relations removed that thread, which left behind a series of simpler problems that didn't ultimately culminate in a big result. With that knowledge, I have a very clear sense of what I would do in future iterations of CS103 to patch this up: replace many of those problems with further practice in discrete math, preferably tied into the topics at hand. And if we bring back set theory proofs, this will be easy to do by resurrecting old problems like the one about monoids and Kleene stars, etc.

These were probably the largest changes to CS103 in terms of structure and pacing, but they weren't the only big edits I made. The other huge edit to the course was in the treatment of Turing machines. I've always felt a bit conflicted about how we teach TMs. We start off with "TMs are automata," then stop talking about automata and start writing code, then talk about TM encodings (which were never really all that clear), then talk purely about writing code. I'd always wondered whether there was a better, cleaner way to present TMs, and this quarter, I think I found one!

The key idea is not to teach TMs, but rather to teach a TM equivalent that's a bit easier for students to wrap their heads around. This quarter, rather than doing the automaton-based version of Turing machines, we used a variation on Hao Wang's B-machine / Post's Formulation 1. Our TMs essentially consisted of assembly-language programs that control a tape head. There are a small number of primitive operations (move the tape head, write a symbol, halt, jump to a label, and conditionally execute a statement) that are all fairly easy to understand, and from there it's possible to build complex TMs by simply writing more complex programs. These TMs are composable, in the sense that it's easy to take code for one TM, copy-paste it into another, and end up with code for a more elaborate TM. For example, I shipped a debugger to students that included TMs that did things like check if the input is a Fibonacci number, convert from decimal to unary, and run the hailstone sequence.
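To give a concrete (if hypothetical) sense of this style of TM, here's a tiny interpreter for a made-up five-instruction language in this spirit, along with a program that decides whether its input has even length. To be clear, the instruction names and semantics here are my own sketch for illustration, not the actual notation we used in class:

```cpp
// A sketch of a Wang/Post-style "assembly language" Turing machine.
// Primitives: move the tape head, write a symbol, halt (accept/reject),
// jump to a label, and a conditional jump (our stand-in for conditional
// execution). All names here are hypothetical, for illustration only.
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Instr {
    enum Op { MoveRight, Write, Accept, Reject, Goto, IfGoto };
    Op op;
    char symbol = ' ';     // used by Write and IfGoto (' ' is the blank)
    std::string label;     // used by Goto and IfGoto
};

// A program is a list of (optional label, instruction) pairs.
using Program = std::vector<std::pair<std::string, Instr>>;

// Runs a program on the given input tape; returns true iff it accepts.
bool run(const Program& program, std::string tape) {
    tape += ' ';                                   // blank padding on the right
    std::map<std::string, std::size_t> labels;     // label -> instruction index
    for (std::size_t i = 0; i < program.size(); i++)
        if (!program[i].first.empty()) labels[program[i].first] = i;

    std::size_t head = 0, pc = 0;
    while (pc < program.size()) {
        const Instr& in = program[pc].second;
        switch (in.op) {
            case Instr::MoveRight:
                if (++head >= tape.size()) tape += ' ';
                pc++; break;
            case Instr::Write:  tape[head] = in.symbol; pc++; break;
            case Instr::Accept: return true;
            case Instr::Reject: return false;
            case Instr::Goto:   pc = labels.at(in.label); break;
            case Instr::IfGoto: // jump if the scanned symbol matches
                pc = (tape[head] == in.symbol) ? labels.at(in.label) : pc + 1;
                break;
        }
    }
    return false;
}

// Sample machine: accept iff the input has even length. It alternates
// between the "even" and "odd" labels as it scans rightward.
const Program evenLengthProgram = {
    {"even", {Instr::IfGoto, ' ', "acc"}},
    {"",     {Instr::MoveRight}},
    {"odd",  {Instr::IfGoto, ' ', "rej"}},
    {"",     {Instr::MoveRight}},
    {"",     {Instr::Goto, ' ', "even"}},
    {"acc",  {Instr::Accept}},
    {"rej",  {Instr::Reject}},
};
```

The composability mentioned above falls out of this representation: since a machine is just a list of labeled instructions, a bigger machine is mostly a matter of concatenating and relabeling code.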

This change had a number of knock-on effects. First, it decompressed the introductory lectures on Turing machines, since the basic rules were much easier to understand and we could always let students tinker around with TMs on their own time on the problem sets. Second, it made the jump from TMs to programs less difficult - it was clear how TMs were kinda sorta ish already like code, and by seeing actual TMs in lecture and on their own time, the jump to full code was easier to understand. (It also let us ask students to design TMs for nontrivial languages on the problem sets - something students did a great job on!) Third, it made TM encodings way easier to understand, since the encoding of a TM would literally just be its source code! And by decompressing the lectures, there was more time for me to talk about major ideas like how TMs and string encodings work and how they model notions of problem-solving.

In addition to this change, I made the somewhat odd decision to not teach the idea of "the language of a TM." Instead, I defined what a recognizer or decider for a language did using first-order definitions in terms of what the TM does to input strings. This removed another term for students to memorize and made it a bit easier (IMHO) when we got to ATM and the universal TM to work through what the expected behaviors ought to be.
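As a rough sketch (my phrasing here, not necessarily the exact wording from lecture), those first-order definitions look something like this:

```latex
% M is a TM, L is a language over the alphabet \Sigma:
M \text{ decides } L \;\iff\;
  \forall w \in \Sigma^*.\,\bigl((w \in L \to M \text{ accepts } w) \,\land\,
                                 (w \notin L \to M \text{ rejects } w)\bigr)

M \text{ recognizes } L \;\iff\;
  \forall w \in \Sigma^*.\,(w \in L \leftrightarrow M \text{ accepts } w)
% Note: a recognizer may reject or loop forever on strings not in L,
% while a decider must halt on every input.
```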

Another minor change was spreading out the introduction of code as a TM equivalent over a longer period of time. For example, after introducing deciders and recognizers, I wrote small pieces of code for functions with the signature bool(string), then had students work out which were deciders, which were recognizers, and for which languages. I think this made the jump to undecidability via self-reference a lot easier.
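Here's a hypothetical version of that kind of exercise - these specific functions are my own examples, not the ones from lecture:

```cpp
// Two functions with the signature bool(string). Which languages do
// they decide or recognize? (Hypothetical examples for illustration.)
#include <string>

// Decider for { w | w contains at least one 'a' }: it halts on every
// input, so it both decides and recognizes this language.
bool containsA(std::string w) {
    return w.find('a') != std::string::npos;
}

// Recognizer, but not a decider, for { w | w is nonempty }: it accepts
// every nonempty string, but on "" it loops forever instead of rejecting.
bool nonemptyRecognizer(std::string w) {
    while (w.empty()) {
        // spin forever; this call never returns on the empty string
    }
    return true;
}
```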

The biggest change (aside from the definition of TMs), though, was in how we did undecidability via self-reference. I kept the same basic formula as before - programs get their own source code, then use a purported decider for an undecidable problem to behave in paradoxical ways. However, I decided to clean up some longstanding technical debt. Instead of framing these self-referential programs as full programs that begin in int main(), get input by calling getInput(), then accept or reject by calling accept() or reject(), I instead had the self-referential programs just be functions that take in a string and return a boolean. This alone reduced the intellectual overhead, since it's way easier to think about functions than programs with tons of auxiliary functions.

Once I'd separated things out this way, I reframed these sorts of self-reference arguments by having students think of them as a discussion between a fortune teller (a decider that can predict what arbitrary programs will do) and a trickster (someone trying to expose the fortune teller as a fraud). This worked incredibly well! It made clear that self-reference proofs involve a dialog between two different parties, and gave a canonical name (trickster) to the self-referential function that would generate the contradiction.

I also found a way to make proofs of self-reference markedly cleaner by framing them as a chain of three simple biconditional statements that lead to a contradiction. I wouldn't have been able to do so without thinking of the fortune teller/trickster paradigm and working through what about it actually caused the contradiction. Honestly, I'm really proud of this.
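As one plausible reconstruction of that chain (the names willAccept and trickster here are illustrative, not necessarily the ones from class), the contradiction falls out of a short run of biconditionals:

```latex
% Assume willAccept(p, w) decides whether the function with source code p
% accepts the string w, and define trickster to do the opposite of what
% the fortune teller predicts it will do on its own source <T>:
%
%   trickster(w): if willAccept(<T>, <T>) then reject, else accept
%
% Writing T for trickster:
\begin{align*}
T \text{ accepts } \langle T\rangle
  &\iff \texttt{willAccept}(\langle T\rangle, \langle T\rangle)
         \text{ returns false}
    && \text{(by the construction of } T\text{)}\\
  &\iff T \text{ does not accept } \langle T\rangle
    && \text{(by the assumed correctness of \texttt{willAccept})}
\end{align*}
% Chaining these yields "T accepts <T> iff T does not accept <T>,"
% a contradiction, so no such decider can exist.
```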

This change also made it a lot easier to write the problems on PS9, since it gave a simple workflow: assume you have a decider for something, then tell people to fill in the trickster function as appropriate and argue why it works.

On entry to this quarter, I'd already built out many of the materials I would end up using because I'd spent the summer doing materials prep and development. For example, I'd already created the TM debugger, many of the slides for functions and graphs, etc. However, one thing I didn't anticipate doing, but which I'm glad I did, was restructuring the course materials to make them more accessible. I had a blind student this quarter, which served as a forcing function to migrate as much as possible away from OpenOffice documents and into something that naturally works well with a screen reader. Using Julie Zelenski's amazing web framework for CS106B, I converted all of the handouts and problem sets into Markdown with LaTeX for the math. This was a huge undertaking, but having done it I'm so glad I did! The handouts and problem sets now play brilliantly with screen readers and just plain look better.

Making things accessible also required me to make the programming assignments accessible. Since I'd built the CS103 assignments using the same framework that I use for my CS106B assignments, and since I'd upgraded that for CS106B to add easy accessibility hooks, it wasn't too difficult to do this. (It did take time to figure out how to design nice command-line interfaces, and I'm not sure that I came up with the most streamlined command prompt, but like all other aspects of CS103 that's something I can continue to work on in the future.)

The lectures and the remaining animated guides are now the last hurdles to having something fully accessible. Unfortunately, I don't see a clear path from where we are now to having accessible lectures. The lectures are heavy on animations and visual intuitions for the concepts. While the OAE did a great job translating and converting things, it still requires a lot of human time to do this, and any changes to the lectures require a round trip through the OAE. I'm going to have to think about how to update this.

And, of course, there were a bunch of other odds-and-ends updates. I did a major overhaul of the second induction lecture to more directly focus on topics that trip people up in induction proofs, particularly the build-up/build-down distinction for existentially/universally-quantified statements. That lesson stuck, but had the surprising side-effect that students started trying to apply that to every induction proof, even ones that didn't involve those quantifiers. Oops. :-) Easy fix for next time. I also replaced the previous complete induction example on trees - which had a lot of intellectual overhead before the complete induction kicked in - with a new one involving counting ways to eat a chocolate bar. I think that example went over way better than the one about trees.

I also did some scattered updates to the other problem sets. I'm particularly happy with how the one on induction turned out, and the one on graphs, while much slimmer than usual, is also in pretty good shape.

On the supplemental materials front, I heavily revised the Guide to Proofs and Guide to Proofs on Discrete Structures, making them mirror the new focus on assuming versus proving and on core basic patterns. I also integrated the Guide to CFGs that I developed over the summer. I think these worked well.

The last major update to the course was a shift from in-person to take-home exams. I initially set this up to maximize flexibility during an uncertain quarter - this was our first time back in the classroom since COVID first arrived, and no one really knew whether we were going to be able to keep things that way. (Fortunately, we did!) I was surprised by the numbers we saw on the exams. The first two midterms had distributions that were almost completely identical to what we saw on the exams in Fall 2019, when the exams were timed, in-person exams and when there were more topics covered. However, the scores on the final exam were much, much higher than in Fall 2019, which better matches my intuition about take-home exams. I'm not sure how to interpret this split, and I'm going to have to ask around about it.

So what should I make of these changes? The short version, I think, is the following. Many of these changes and edits are clear wins and definite keepers. Some of these edits worked less well than expected due to unexpected interactions or patterns that previously weren't exposed, but can easily be tweaked in a way that fixes them. And some of these changes, in retrospect, might have been a bit too zealous and should be revisited. But on balance, I think the course is in a much, much better state than before, and that with a bit more polish / gradient descent it'll approach a stable and effective state.

All of the changes I've mentioned above (with a few exceptions) have been content and formatting changes, because that's where the bulk of my efforts were focused. However, there are many important policy and structure changes that I think it would be good to look into. For starters, would it be good to bring back some limited form of revise-and-resubmit for the exams? We could consider, for example, letting people resubmit exams to earn up to 5% of the possible points. Next, what should we do with regards to tutorials? I folded many of the tutorial examples into the mainline lecture, and some of the tutorials were rendered obsolete given content changes. Perhaps we can bring them back as CS103A lectures, revising and updating the relevant sections based on what we changed in the mainline lectures.

There's also a question about what to do for students who are struggling and need extra one-on-one support. We set up an ad-hoc system this quarter to help out, but because it wasn't structured it relied on individual TAs to volunteer their time outside of their regular commitments. That meant that, when things got busy toward the end of the quarter, we weren't able to provide as much support as I think would have been good. Would CS103A automagically fix this? Should we build one-on-one help into the TA expectations for each week to create more slack capacity? Should we partner with CTL? Something else?

All of this is to say that there's a lot that worked great and there's a lot more left to be done. I'm excited to keep working on the course and making things work even better. :-)

2020-2021 Academic Year
Fall 2020

This quarter of CS103 ran during the COVID-19 pandemic, when the class operated purely remotely. I co-taught the course with Cynthia Lee, and it functioned very differently from our normal structure.

Following the lead of what Cynthia piloted in CS103 in the Spring 2020 offering of the course (along with David Varodayan and Amy Liu), we structured the course as follows. The course had four components: assignments, lecture quizzes, tutorial sessions, and midterm exams. The assignments were based on the ones we typically give out during a normal quarter, with a number of changes and edits (more on that later). The lecture quizzes were designed to ensure that students were staying on top of the material. We used the videos recorded from my Fall 2019 offering of the course, releasing the videos at the normal rate of three lectures per week, and had three questions per video to ensure that students stayed on top of the course content. Tutorial sessions were designed to foster community and help prepare students for the week's assignments; Cynthia and I each ran three tutorial sessions each Monday. Finally, we held four midterms at a rate of every other week, using a "mastery learning" approach (more on that later). On balance, this system seemed to work well, and there are many aspects of the course that I would love to experiment with during regular, non-remote academic quarters.

On the subject of assignments - the assignments that we gave out this quarter were based on the ones that I developed for the Fall 2019 offering of the course, with a number of tweaks designed to make them operate more effectively in a remote environment. The biggest change was, in my opinion, the revised coding assignments. Over the summer, I rewrote the DFA/NFA editor, the regex editor, and the CFG editor as downloadable C++ projects using the MiniGUI framework I'd developed for CS106B (and the SimpleTest system that Julie and I both contributed to). I also added in visualizers for the set theory programming assignment and the mathematical logic assignments, and created a binary relation editor and graph editor for use on PS3 and PS4. These shipped with local autograders, which operated far more transparently than the ones up on Gradescope while running the same tests. These, I think, were an unalloyed win compared with previous quarters. We were able to fully autograde many of the "find a binary relation where..." or "draw a graph where..." type problems while giving students continuous feedback on their progress, and no longer required any TA work behind the scenes to import data from our JS-based autograders into Gradescope. I'm proud to report that there were only a small handful of bugs in these assignments across the whole quarter, considering how new all the code was! The gnarliest bug was a 32-bit/64-bit issue that popped up on older Windows systems due to me using size_t instead of uint64_t in a bitvector implementation (oops). There's too much to describe here about all the shenanigans that went into those systems, and I'll probably write up a secondary report detailing how they were put together. But suffice it to say, I think these changes are here to stay!
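For the curious, the shape of that bug is easy to reconstruct (this is a hypothetical illustration, not the actual assignment code): on a 32-bit Windows build, size_t is 32 bits wide, so shifting a size_t value of 1 by a bit index of 32 or more is undefined behavior, while a fixed-width uint64_t behaves identically everywhere:

```cpp
// Sketch of the 32-bit/64-bit bitvector pitfall. With size_t, the
// expression (size_t)1 << index is undefined behavior for index >= 32
// on a 32-bit platform; uint64_t makes the word width explicit.
#include <cstdint>

// Reads bit `index` of a 64-bit word; requires index < 64.
bool testBit(std::uint64_t word, unsigned index) {
    return (word >> index) & 1u;
}

// Sets bit `index` of a 64-bit word; note the fixed-width 1, not (size_t)1.
std::uint64_t setBit(std::uint64_t word, unsigned index) {
    return word | (std::uint64_t{1} << index);
}
```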

Aside from the coding changes, I did some major revisions to the problem sets - especially the ones at the start of the quarter - to help better teach students proofwriting. We converted many of the problems on PS1, PS2, and PS3 to "fill-in-the-blank" proofs in which the basic structure was provided to students, who then needed to fill in missing parts. This was designed with three goals in mind. First, I wanted to model for students what the general shape and structure of a proof should look like, essentially preventing folks from going off-trail early in the proof and thereby missing the bigger picture. Second, I wanted students to get specific practice with particular aspects of proofwriting, whether that was introducing existentially-quantified variables or reasoning formally about specific definitions. And finally, I wanted to give more examples to students about what proofs ought to look like, which I figured would be especially important given that everything was being run remotely and that getting real-time assistance would be all that much harder than usual. I was very optimistic about these changes, but in all honesty I'm not sure how effective these new types of problems were. We definitely saw people writing better proofs on those particular problems than what we'd seen in the past, but it didn't seem to speed up the rate at which people absorbed proper proof structure. I'd like to keep experimenting with this format more to see whether it can be made more effective.

On top of this, I did an aggressive set of rewrites of some of the earlier problem sets to try to give broader coverage to different styles of proofs (assuming universals, proving universals, assuming existentials, proving existentials, etc.) In one regard, I think I succeeded here - there was a much wider array of problems on these problem sets than before. In another, I think there's still a lot of room for improvement here. Specifically, I found myself cutting a number of beloved problems and long-running chains from Fall 2019 in order to put in more "meat and potatoes" type problems. At the end of the quarter, I started cataloging all the problems I've used or developed for CS103, and I think it would be worthwhile, before my next iteration of CS103, to go back over these revised problem sets and replace earlier, less exciting problems with more interesting, thought-provoking ones on similar topics.

There were also some more particular, but still significant, areas in which I'd like to target the problem sets for improvement. The version of PS1 we used this quarter, due to an oversight on my part, didn't have many examples of "element analysis" proofs on sets (i.e., proving results about sets by going one element at a time). PS3 had no coverage at all of strict orders, and one of the main questions (odd and even functions) accidentally gave a huge advantage to students who had previously taken calculus courses. PS8 had minimal coverage of Turing machines, making PS9 the first time students ever really did anything with computability. While these aren't gigantic issues, they are things that I'm looking back over and thinking could have been done a lot better, and I'm looking forward to correcting this the next time around.

Turning to tutorials, first, let's cover the mechanics. We held hour-long tutorial sessions every Monday at six different time slots, and recorded one version from each week for folks who couldn't attend. These tutorial sessions were based on the CS103A materials Amy Liu developed in Fall 2019, which were in turn based on her Summer 2019 version of CS103, and were designed to give students a gentle transition from the lectures to the problem sets. While I have specific tweaks I'd like to make here and there to these materials, overall I think they worked really well. They were a great place for us to engage with students, to see where folks were in their understanding, and to model the sorts of problem-solving strategies we wanted them to use in the problem sets. It was also a spot where we and the students could interact with one another, which was definitely needed in a remote quarter!

Based on what students struggled with most this quarter, I would like to go back and revise the first three or four of these tutorial sessions to better target higher-priority topics. It would have been a good idea to focus more on the mindset that goes into assuming universal statements versus proving universal statements, setting up and executing proofs on discrete structures, and doing element-analysis proofs in set theory. But I'd file this away as "now I know," rather than "boy, did I mess this up." :-)

I have been musing about whether we could introduce discussion sections into CS103 and what the proper role of CS103A should be, and these tutorial sessions seem like exactly the sort of launching point for answering these questions. Should we make tutorial sessions a mandatory part of CS103 once we're back on campus? Or should we have CS103A as our mechanism for deploying this content to students who are interested in extra practice? I'm definitely interested in playing around with this and am curious to see what comes out of it.

The last component of the course was the midterms, and they were probably the biggest component of the course in terms of time invested by Cynthia and me. Going into this quarter, we knew we were facing the challenges of teaching during a pandemic, and we were also acutely aware that the 2020 US Presidential Election was coming up in the eighth week of the quarter. After having Winter 2020 end chaotically with the start of COVID-19 and Spring 2020 end tumultuously with the George Floyd protests, we were extremely concerned that we might have yet another quarter upended by enormous, history-making events. We therefore made the choice to have four midterm exams, administered in the third, fifth, seventh, and ninth weeks. This made for a lot of midterms and for much earlier testing than what I'd normally do, but gave us the flexibility that, should we have to abort the quarter in Week 8, we could do so with enough data to meaningfully assign grades and call it a day. Fortunately, we didn't end up having to do that, though we did pay that extra price in terms of student and staff time.

The midterms themselves were given out as 48-hour take-home exams. Students needed to earn a 90% score to receive credit for the exam, and if they didn't hit that bar they'd be given a week to revise and resubmit. If they didn't hit the 90% score that time, we'd give them a third chance to resubmit, but drop their overall course grade by one letter. After the initial submission, Cynthia and I made videos offering more detailed advice on each of the problems and held between four and eight extra hours of office hours to help students with the exams. This was a highly labor-intensive operation, but it seemed to work - we saw students make marked progress from the first round of submissions to the second, and by the end of the quarter we were extremely happy with the work students were submitting.

I am still calibrating how to appropriately design exams in this context. The first midterm we gave out was longer than the rest (it had multiple proofs on it, as well as a challenging short-answer question) and seemed to be the trickiest for students. This was partly due, I believe, to it going out in Week 3, when students were still acquiring key skills in proofwriting. Knowing what I know now, I probably would have kept two of the problems on that midterm (All Squared Away, which I think I might add to PS1 in the future, and Never Never Land, a problem about negated quantifiers) and cut/edited the last problem (Tiny Sets, an exploration of the set-theoretic concept of an ideal). The later midterms, on the other hand, likely could have covered a bit more ground, though I think the questions they asked were perfectly reasonable.

We also rolled out a new grading system this quarter. Instead of having individual components of the grade that were then weighted and summed up to compute a raw score, we evaluated students in the four listed categories (assignments, quizzes, tutorials, and exams), mapped each component to a letter grade (A, B, C, or NP, with no plus or minus grades), and then assigned as our final letter grade the lowest of these four components. In many ways, this simplified grading and made things transparent to students. In other ways, it increased student stress, since when doing an exam resubmission individual points made a major difference. Going forward, I think we should look into doing some sort of "smoothed" system for exams so that there's more graceful degradation. Incidentally, this system meant that grades this quarter were significantly higher than in past quarters - it seems like people took the revise-and-resubmits seriously!

Going forward, I think there are a few major areas where I'll need to focus my efforts. One is doing a much closer look at office hours, what students expect from them, and how they're conducted. This quarter, and in Fall 2019, the student evaluations of office hours were much lower than those for the other aspects of the course. I'd like to get more visibility into why that is. Is this because of a mismatch between what students expect from office hours and what we provide? If that mismatch exists, is there a way to provide students the support they're looking for without compromising our learning goals? Another is investigating TA training, something that normally happens somewhat organically through grading parties and other informal social side-channels. This is harder to rely on in a remote quarter and suggests that we need to put a bit more focus there in the future.

Overall, I'm impressed we were able to teach so many people so much during a pandemic quarter. The changes in format and structure have given me a lot to think about (in the best possible sense of the term), and I'm looking forward to when we're back in person. I'd love to see what we're able to keep!

2019-2020 Academic Year
Fall 2019

This quarter was motivated by the maxim that elegance is not when there's nothing left to add, but when there's nothing left to take away. I decided to take a close look at the topic coverage for the course and the problem sets with that question in mind, and the result was a version of CS103 that is more focused and, I think, quite beautiful.

The most notable change from the Fall 2018 version of CS103 was a reduction in the sizes of the problem sets. I took a hard look at the problem sets from last quarter with the goal of (1) preserving the long-running threads that made for interesting questions and (2) reducing their size as much as possible without sacrificing coverage. These goals are in tension with one another, but I think I managed to find a good compromise between them. This meant, sadly, cutting the star-drawing questions from the Fall 2018 offering. As beautiful as they were, students didn't seem to warm to them as much as I'd hoped, and their introduction expanded the topic coverage of the quarter to include number theory in a way that competed with other topics for understanding. We also lost another beautiful question, the one about the universal set, which I removed because I couldn't find a way to fit it in with everything else that needed to be covered.

What remained, however, seemed much more polished than before. In addition to the above changes, I redesigned some of the existing questions to have a more exploratory workflow. Many questions are now structured in a way that has students first engage with a new topic by playing around with concrete examples before jumping into a more abstract, proof-based setting. This seemed to go really well, leading students to get a much better handle on the topics we did end up keeping. I also was able to retain the hypercube and Myhill-Nerode theorem threads, which helped tie the topics together nicely.

Another huge change to the problem sets was the introduction of first-order logic and propositional logic autograders. This led to every single logic problem on Problem Set 2 being autograded, saving a huge amount of TA time and improving student understanding. I had to restructure / change some of the older logic translation and logic negation questions, but I managed to do so in a way that preserved much of the underlying mathematical structure. (For a fun exercise, see if you can find the correspondences between the old and new problems!) More generally, I upgraded the autograder infrastructure to provide better error-reporting and to handle missing files more gracefully, something that saved a lot of frustration on the students' parts. There's still some work to do in improving autograder quality, but I think this was a huge win overall.
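For a sense of how an autograder like this can work, a propositional-logic checker can brute-force every truth assignment to compare a student's formula against a reference formula. The sketch below is purely my own illustration of the idea, not the actual course infrastructure:

```python
from itertools import product

# Minimal sketch: check two propositional formulas for logical equivalence
# by brute-force truth-table enumeration. Formulas are represented as
# Python functions of their variables; this is an illustration only.
def equivalent(f, g, num_vars):
    """True if f and g agree on every assignment of truth values."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=num_vars))

reference = lambda p, q: (not p) or q        # p -> q
student   = lambda p, q: not (p and not q)   # a correct rewriting of p -> q
wrong     = lambda p, q: q or not q          # a tautology; not equivalent

print(equivalent(reference, student, 2))  # True
print(equivalent(reference, wrong, 2))    # False
```

Truth-table enumeration is exponential in the number of variables, but for the handful of variables in a typical translation exercise that's perfectly fast, and it gives a definitive right/wrong answer with no human grading needed.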

On the lecture side, I ended up dropping coverage of Hasse diagrams and of graph coloring. I was sad to see them go - they're wonderful concepts - but nothing really ended up depending on them and cutting them out freed up more time to explore the other concepts in more depth. Surprisingly, cutting these topics didn't mean that lectures got out early; rather, we ended up taking more time talking about the other concepts as a group, which I think led to better overall understanding of the remaining topics.

Also on the lecture side, I did a pretty significant overhaul of the introductory lecture on mathematical proofs, framing proofwriting in terms of a "Proof Triangle" combining definitions, intuitions, and conventions. This lecture also spent a lot of time focusing on problem-solving as an activity distinct from proofwriting, two things we'd normally lumped together. I think this went over well with students, and I'm planning on keeping this up in the future. I also made more localized improvements to the very first lecture on set theory and to the lectures on first-order logic that I'm quite happy with. Additionally, I factored out the discussion of the subset construction from our lecture on automata into its own "Guide to the Subset Construction," which decompressed the lecture and made for a nice, standalone resource for students. I was surprised to see that many of the lectures I had previously considered to be in great shape - the first lectures on automata, the second lecture on induction, etc. - are now some of the weaker ones! I suppose that's a good thing - it means I've significantly improved the average quality of the lectures since they settled into a stable form.

I think there's still quite a lot of work to do to get things into a state of final polish. The slimmed-down problem sets included some new problems that could use some tuning and focus. The course has generally drifted more and more toward the theory side of things and away from some of the practical motivating questions that it used to explore, and perhaps it's worth resurrecting some older problem set questions that were more application-driven. The lectures are still very much in need of tuning and correction, and I imagine there are even more topics that we could eliminate from the course.

Something I'll need to work on in the future - the DFA/NFA editor and other online tools are starting to show their age a bit, in that they don't directly interface with GradeScope in the same way as the other "coding"-style assignments. It may be worth rewriting these systems so that they hook right into the GradeScope autograding infrastructure. This would make it possible for students to test and iterate, likely leading to much better outcomes.

That being said, this quarter's offering of CS103 had some of the highest reviews ever in terms of instructional quality and course organization, which means that in many ways the changes I made here were very much steps in the right direction!

2018-2019 Academic Year
Fall 2018

This quarter's offering of CS103 represents one of the largest changes to the problem sets since Fall 2015. There are too many specific changes to list here, but the major goals were to (1) integrate proofwriting more tightly into the back half of CS103, (2) increase the number of long-running threads across the problem sets, and (3) select problems that had "stakes" on them along the lines of my new CS106B assignments. In many ways, I think these problem sets met those goals. I introduced a chain of questions in the back half of the problem sets that culminated in a formal proof of the full version of the Myhill-Nerode theorem, tying equivalence relations, functions, and induction into the material on regular languages in ways we previously hadn't been able to make work. This was one of three long-running threads added into the problem sets - another was on star-drawing (exploring number theory), and a third was on hypercubes and symmetries (exploring group theory). I think that these problems will need some tuning to get right in future offerings, but I very much think they're a huge step in the right direction. I'm not sure how well I hit goal (3) - I'll go and update this section after getting the course evaluations to see what people thought.

I also piloted a number of changes to the introductory lectures and supporting materials. I retired the old "Guide to Proofs," which didn't seem to be as effective as the Proofwriting Checklist, and added in a new Guide to Proofs on Set Theory that seemed to dramatically improve student outcomes in that area. I also cut out our treatment of rational and irrational numbers, which had always felt a little bit shoehorned into the course. I revamped Problem Set One to introduce more practice with proof reading in addition to proof writing, which seemed to be effective, and changed the problem set grading system by adding in a square root normalization term at the end, something shamelessly stolen from MIT that Cynthia piloted last quarter.

Additionally, I just generally spruced up some of the problem sets that hadn't been touched in a while. I replaced many of the classic DFA and regex design questions with newer ones that had a bit more of a practical feel to them. I decided to remove the lengthy treatment of strict weak orders from Problem Set Three, which hadn't turned out as well as I'd hoped when I introduced it last year, and added in new questions about linear orders and group preferences. (There's still a question on strict weak orders that's much more polished and focused than last year's version; it turned out pretty well!)

On the backend, I upgraded the assignment autograders to use a newer, more robust system that can recover from crashes and timeouts in ways the previous system wasn't capable of handling. I still need to do some more work on it to pass back information about compiler errors, something that came up multiple times this quarter.

The most serious software upgrade I introduced was an interactive Problem Set Zero that guides students through an introduction to star-drawing, setting up one of the long threads for the problem sets. This seemed to go over well, and I'm going to need to think about whether it'll be possible to use this in other contexts. Could we use this to teach first-order logic? Binary relations? There's a lot of options here and I'll likely spend some time over one of the breaks playing around with this.

The other major shift in this iteration of CS103 was increasing the focus on intuition and conceptual understanding. In previous offerings of CS103, a common student concern was that they were having trouble demonstrating what they knew to be true through mathematical proofs. I decided to add in more questions that explored concepts in ways that didn't involve proofwriting, such as an exploration of permutations and permutation composition through dancers changing positions and a question on how to think about automorphisms as symmetries. The second exam also included several questions that explicitly pushed on conceptual understanding, and it seemed to be well-received. I'm going to try to keep this up in the future.

Most of the lectures this quarter were unmodified from previous quarters or represented small incremental improvements to older versions. One major change was a new in-class "MathematiCalisthenics" exercise to motivate complete induction, which seemed to go over well. Another was a revised presentation of undecidability via self-reference using self-defeating objects. I think the initial version of this updated lecture went well but could use a little bit of tuning in the future. I swapped out one of the examples in the lecture on the pigeonhole principle for an example of a little logic / math problem with a gorgeous solution; that one seemed to go over very well. The last major upgrade was a new presentation of NP-hardness and NP-completeness that I think went over much better with students than the usual one.

On the staffing side, this quarter was the first in a long while with no returning TAs, so the start of the quarter was busier than usual with extra staff training sessions. I also updated all of the internal documentation for the TAs, which turned out to be in need of some touch-ups.

At the end of this quarter I have a lot of items on my to-do lists for the future. The problem sets we gave out this quarter will need some more tuning to get into ideal shape, and I'm going to see whether I can pare them down a bit without sacrificing (much?) coverage. I'm starting to think that my treatment of P and NP is in dire need of an upgrade; given that the goal is to cover the P versus NP question in two lectures (!), I suspect that what I'm doing isn't as effective as it could be. I have some ideas about how to revisit this (including the radical idea of defining NP as "problems that reduce to SAT in polynomial time" and then introducing Cook-Levin as "a problem is in NP if and only if there's a polynomial-time verifier for it," reversing the usual ordering). In the course of researching topics for the second midterm, I came across a ton of beautiful concepts in graph theory that seem more interesting and exciting than what we're doing now - I'll need to see whether that can be worked into the main body of the class. And more generally, I'm going to take a hard look to see whether there are ways to scale back the breadth of content a bit. Removing rational and irrational numbers and offloading some of the nuances of set theory proofs into a separate handout made the lectures way less hectic and improved student outcomes; are there any other topics (looking at you, planar graphs and graph coloring) that could also be removed without doing much damage?

The more systemic areas to focus on in the future are going to be the shape of CS103A and the sorts of support resources afforded to students. Currently, CS103A functions mostly as unstructured problem-solving time. Would it be better to do problems as a group to show off a particular problem-solving modality? Right now, the CS103 problem set solutions just include polished answers. Should they also include advice about how to think about those problems, so that students see that the proof is just an artifact at the end of a line of reasoning? Right now, we have more office hours time slots than the LaIR provides for CS106A/B, and yet students feel like they can't get the time of day with the TAs. Should we introduce discussion sections, or change up the format of how our office hours work? Julie had the great suggestion of asking for an extra TA to meet one-on-one regularly with students who are struggling the most - what would that look like? I don't yet have answers to these questions, and fortunately I've got some time to think them over before I jump into the next iteration of this class.

I'm going to update this once I hear back from students in their course evaluations, but from our perspective it seems as though students did much better this quarter than in previous offerings of CS103. Problem set and exam scores were higher, even though I don't think the exams were any easier than what we've given in the past - and that's with most of the problem sets revised from scratch. I'm going to have to dive deeper into this to see if I can figure out what exactly is causing students to do better - whatever it is, I'm going to want to keep it up!

2017-2018 Academic Year
Winter 2018

This was my first time ever co-teaching a major class! A huge thanks to Cynthia Lee for all her input and advice over the course of this quarter.

With Cynthia on board, we significantly overhauled the lectures, dropping the "Your Questions" section in favor of smaller, in-class questions taught using peer instruction. These questions seemed to be quite effective at getting students to think about the concepts we were teaching and to help them get a better sense for key terms and definitions.

In other ways, this offering of CS103 served as a refinement of the version from Fall quarter. We patched up some rough edges around the new problem sets and added in a few new problems to some of the problem sets to address weaknesses we'd seen on the final exam from Fall 2017. We introduced some new materials (a "Guide to Proofs on Discrete Structures" and a "Discrete Structures Proofwriting Checklist," which have been badly needed for a long time). We also made some changes to our grading system, the first time we've done so in a while, and I'm quite happy with how they turned out.

Another visible change is the addition of more structured hints and advice throughout the problem sets. On many problems, especially at the start of the quarter, we provided students hints that, rather than just telling students how to crack each problem, talked about a general mentality or skill the student could use to approach similar problems in the future. We were quite happy with the results!

Co-teaching this class gave me the opportunity to sit in the audience and watch this class be taught for the first time, and in my conversations with Cynthia we came up with some beautiful new intuitions and insights that I hope we feed back into the course in future offerings. One topic I'm particularly excited to explore is the idea of framing self-reference proofs in terms of self-defeating objects: a decider for the halting problem can't exist because it can be used against itself, just as there can't be a largest integer because it could be used to show that there's an even larger integer.

Although I'm very happy with how the in-class questions worked out, I'm a little bummed that we had to cut the popular "Your Questions" sections from each class in order to make it work. Given how much material there is packed into this course I'm not sure there's a way to get the two to work side-by-side, but this is something I'd like to try to address for future quarters.

Fall 2017

This quarter's offering of CS103 introduced a number of pretty significant changes. The most visible was the inclusion of new programming assignments as part of PS1 - PS5. These coding assignments were designed to get students to take the higher-level mathematical constructs we've learned and to express them precisely in code in a way that the computer could understand. For example, on Problem Set 1, the assignment asks students to write code for predicates on sets to check whether, say, S is a subset of T, whether S = {T}, etc. These assignments were popular with students and we saw some positive results. We didn't, for example, see anyone mix up element-of and subset-of after the first problem set!
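In spirit, those predicates looked something like the following sketch (the actual assignment's language and function names may well differ; this is my own Python illustration, using frozenset so that sets can contain other sets):

```python
# Illustrative sketch of set predicates like those described above.
# The names here are my own, not the assignment's.
def is_subset_of(S, T):
    """True if every element of S is also an element of T."""
    return all(x in T for x in S)

def equals_singleton_of(S, T):
    """True if S = {T}, i.e., S's one and only element is the set T itself."""
    return S == frozenset({T})

A = frozenset({1, 2})
B = frozenset({1, 2, 3})
print(is_subset_of(A, B))                      # True: {1, 2} is a subset of {1, 2, 3}
print(equals_singleton_of(frozenset({A}), A))  # True: {{1, 2}} = {{1, 2}}
```

Writing these out in code forces exactly the distinction students tend to blur: `is_subset_of(A, B)` quantifies over the elements of A, while `equals_singleton_of` asks whether A itself appears as an element, which is the element-of versus subset-of distinction mentioned above.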

Independently of the new coding assignments, I also made some pretty serious changes to the early problem sets, especially Problem Set Three, by swapping out some of the more abstract problems for questions about how strict orders and equivalence relations make an appearance in the C++ standard library.

The other major area I made changes this quarter was in the messaging around grades and progress. Students are always a little shocked at the grades they get on the first few problem sets of the quarter, since that's when they're still getting used to writing proofs. I decided to change the lectures to spend much more time talking about the grade distribution and to frame the message as "there's always room for improvement" and "you are on an upward trajectory." I similarly sent out custom emails to each student around the first and second midterm congratulating people who had done well and reassuring people who didn't perform as well as they'd liked. I also made sure to call attention to the progress students had made over the course of the quarter.

2016-2017 Academic Year
Spring 2017

This quarter's iteration of CS103 built on the version from Fall quarter. Thanks to the two buffer days in that syllabus, for the very first time I didn't need to cut out two full lectures' worth of material when putting this class together!

I decided to revert to 180-minute midterms instead of in-class midterms, which I think worked much better. I'll continue that trend for the foreseeable future.

The major focus for this quarter was improving the quality of the grading feedback and generally pushing proofwriting much harder than in previous quarters. The results were impressive: the quality of the work students submitted on the midterms and final exam was vastly superior to what we'd seen in previous quarters. I think the median student from this quarter is much better prepared to continue on to classes like CS161. Part of this came from articulating proofwriting style more precisely through a number of "proofwriting checklists" we distributed throughout the quarter, and part from making the grading on the problem sets put a greater focus on proofwriting style.

There are a number of tweaks and updates that need to be made to this grading system, as is the case for any major overhaul. In the future we'll allocate more points toward intuition, and we'll try to be more aggressive about signaling why this level of rigor is so important. However, based on the results we're getting, I think that this new approach is far more effective at getting students to where they need to be.

I also made a number of more minor edits to the course, such as souping up the proofwriting on some of the earlier problem sets, recasting some of the older problem set questions to make them more applied, and cleaning up some of the lectures (especially on the Pigeonhole Principle). I think that trying to put in a proof of Sperner's lemma in the 2D case may have been a bit too ambitious, but other changes like exploring edge cases of self-reference and switching our proofs in computability to explicitly reference real program code were definite steps in the right direction.

Fall 2016

This quarter's offering of CS103 continued along the trajectory set in Fall 2015. I did more tuning and cleanup of the presentation of core concepts, ultimately devising a 28-lecture syllabus (with two lectures off on exam days) that will be easier to pick up and run with in future quarters. I decided to eliminate exploring non-RE languages via self-reference, primarily because the level of additional complexity required for introducing these concepts wasn't really worth the payoff. I also made some major adjustments to the presentation of the universal Turing machine, motivating it by starting with a computer-based TM simulator and thinking about how to recast it as a TM-based TM simulator and tying it in to virtualization. I was extremely happy with the presentation of undecidability, and really only think there are a few more bugs to work out, most of which are best accomplished in the problem sets (for example, requiring students to see why self-reference can't prove that every language is undecidable).

I made a number of pretty significant changes to the problem sets, ramping up the number of proofs required and being more intentional about the topics covered. I think I probably made them a bit too long, and will likely back that off a bit the next time I teach the class, but I did really enjoy working more advanced results and applications of the material into them.

I also introduced a number of new animated guides, including a Guide to Logic Negations, a Guide to First-Order Translations, a Guide to Elements and Subsets, and a Guide to Cantor's Theorem, most of which I built out over the summer. They seemed to work really well, and I'm hoping to soon take a crack at writing a Guide to Proofs on First-Order Logic. I'm also hoping to rewrite the existing (textual) Guide to Proofs to offer some more concrete details for students to focus on.

As an experiment, I tried having in-class midterms (80 minutes each). I think that this generally didn't go as well as the three-hour out-of-class midterms, though they were logistically much easier to administer. Next quarter, I'll likely back off from this idea, but I'm definitely open to revisiting it.

And of course, there were a ton of minor little tweaks scattered throughout the quarter. I tuned up the presentation on graphs and the pigeonhole principle and reworked the structure of the lectures on binary relations. I revisited the way I introduced the verifier-to-TM construction to make it more visual and added in more concrete visualizations of how to evaluate first-order logic formulas. I'm really happy with the slides and hope to continue using them in the future.

Making these changes took a lot of work, and I was definitely exhausted by the end of the quarter. But it seemed to pay off spectacularly - the course evaluations were higher than they've ever been. In Spring of this year I'm going to have a lot of tuning and updates to do, as always, but I'm optimistic about it.

2015-2016 Academic Year
Winter 2016
View the Winter 2016 CS103 website.

This offering of CS103 - which, I think, was the most polished the class has been to date - was essentially the Fall 2015 version with a ton of cleanup and two whole lectures (!!) of content removed to accommodate the Winter schedule. With the lessons learned from Fall 2015, I managed to significantly tighten up the presentation of TMs (scaling down the intellectual burden involved with subroutines, for example) and come up with a much better lecture on complete induction (thanks, trees!). I've also made a ton of notes to myself with areas for future improvement, mostly with regards to moving motivations earlier into lecture and trying to take some of the more theory-heavy topics and replace them with more practical applications.

This quarter, I introduced the Guide to Self-Reference and Guide to the Lava Diagram, two animated slide decks walking through the intuition for two of the trickiest topics from the quarter. They were very well-received and I'm extremely proud of how well they turned out. I think they really worked - people did a lot better on the final exam questions on computability than they historically have. I'm hoping to expand this set of guides and to rework some of the supporting handouts from the quarter to use this new format.

There's still a lot of work to be done to get this class into good working order. I need to figure out a better way to design exams that communicate our expectations. I need to find some way to integrate more of the topics from the discrete math section into the computability/complexity topics. I also need to find a way to cut even more content, since we're still totally overloaded with material. That said, this version of the class got some of the highest reviews CS103 has ever received, so there's still a lot to be really happy about!

Fall 2015
View the Fall 2015 CS103 website.

This offering of CS103 started with a pretty serious overhaul of the first half of the class. I moved first-order logic to the second week and gave an entire problem set pretty much devoted to first-order and propositional logic. Using that as a starting point, I increased the treatment of discrete mathematics and spent a lot more time in class going over how to use first-order logic as a tool for reasoning about proofs. I added in a new problem set about graphs, and pushed induction to the end of the section on discrete structures rather than the beginning. The net result seems to be that people got a much better treatment of discrete math and had much better outcomes with proofwriting and formal definitions.

To fit all this in, I used the two extra lectures that come with a fall quarter class. I also removed nondeterministic TMs from the CS103 lineup, which freed up about half a lecture from the section on Turing machines and another half lecture from the portion on P and NP. I also lowered my expectations about student outcomes with P versus NP, since they'll get a more in-depth treatment of the subject in CS161 (and CS154, if they choose to take it).

I also, for the first time, introduced a series of challenge problems throughout the quarter for students looking to dig deeper into the material. I'm really happy with the first set of challenge problems (which led up to a proof about the chromatic number of the rational plane) and pretty happy with the other two. I hope to keep using them in the future.

As promised, I also introduced a ton of new handouts. It's unclear how effective they were, though I did find an unusual trend in the grades: freshmen in CS103 this quarter did substantially better than every other group. I'm wondering if this is because the freshmen were actually going through all the materials, or whether this is due to other effects.

The overhaul required a ton of effort, and I'm really hoping that next quarter will be more relaxing. I know that I need to tune up my presentation of TM subroutines (which, right now, are terrifying), my coverage of complete induction (I still have never really gotten that one to work right), and my coverage of P versus NP (which right now, I think, is a bit too high level). I also will need to figure out how to cut a lecture for winter quarter (one bonus lecture on Eulerian graphs is easy to eliminate). I've taken tons of notes on my materials for next time, so hopefully I'm closing in on a stable, final version of this class!

2014-2015 Academic Year
Spring 2015

I was very pleased with how this iteration of CS103 worked out. I ended up continuing to use the recursion theorem as a primary mechanism for showing undecidability and unrecognizability, but significantly overhauled the presentation to focus more on actual source code than on high-level descriptions. This cleaned up much of the presentation and made it much easier for students to see how these constructions worked. I also reordered the treatment of verifiers and NTMs to put verifiers earlier, and then heavily pushed verifiers as a mechanism for understanding RE. This made a lot of the proofs easier to understand and replaced complicated nuances of recognizers with simpler (though still tough) notions of verification.

This quarter also marked a return to an older model where I covered first-order logic prior to discussing graphs, binary relations, and functions, which I believe helped out quite a bit. There is still a lot more work to be done integrating first-order logic into the discrete structures portion of the course - specifically, I need to do a much better job talking about how to write proofs that call back to formal definitions - but I think it made it easier to define complex concepts more precisely.

Finally, this quarter marked the first iteration of CS103A, an add-on course providing extra review and problem-solving practice. That class went pretty well for a first offering, and I'm hoping to tune it a lot more precisely in the future.

Next time I teach this course, I'm going to improve in a few specific areas. First, I need to do a better job teaching students how to manipulate first-order definitions and how to use them to guide the basic structure of mathematical proofs. That's going to be a bit tough, especially given that we're already packed to the gills with material. Second, I need to update my presentation of P and NP, probably by cutting out the NTM intuition for NP and focusing more on reducibility and verification. Finally, I need to provide more support materials along the way, either by fleshing out the course reader, building out more supplementary handouts, or some combination of both. That said, this was, in my opinion, easily the best offering of CS103 to date, and I'm quite happy with how the class looks now.

Fall 2014

This was my sixth offering of CS103. I ended up making a lot of changes since the prior offering. Students were allowed to work on problem sets in pairs and triples, which helped students get more practice writing proofs and significantly cut down on the TA grading load. I started offering in-person practice midterms, leading to much better exam performances overall. I also changed the discussion of computability quite significantly by focusing on the recursion theorem as a method for showing impossibility results and exploring equivalent definitions of the class RE through verifiers. I think that I'm on to something with this setup but that it needs a bit more tuning, so we'll see how it goes next time around. (Oh, and this class broke the CS103 size record by about 80 students!)

2013-2014 Academic Year
Fall 2013

This fifth offering of CS103 broke the size record by quite a large margin (333 students!). I made a lot of changes to the course that really seemed to pay off. I updated the treatment of discrete math to stress discrete structures more than in past quarters and, accordingly, revised some of the problem sets. I also replaced the pumping lemma with the Myhill-Nerode theorem, which made for a much more intuitive presentation of nonregular languages and led to some much cooler problem set questions. I revised all the discussion problems to be more relevant, gave out more practice problems, and dedicated a few minutes of lecture time each day to answering questions students asked via Google Moderator. CS103 got its highest evaluations ever and was one of the highest-rated courses in the entire School of Engineering, and I'm extremely happy with how it turned out. That said, I've got a lot of ideas I want to test out next time, so I'm looking forward to the next time I get to teach it!

  • Total students enrolled: 333
  • Lecture Topics:
    • Week 1: Set Theory, Cantor's Theorem, Proof Techniques
    • Week 2: Induction, Graphs
    • Week 3: Graphs, Relations, Functions
    • Week 4: Cardinality, Logic
    • Week 5: Logic, DFAs, NFAs
    • Week 6: Regular Expressions, Myhill-Nerode Theorem, CFGs
    • Week 7: Turing Machines, The Church-Turing Thesis
    • Week 8: Unrecognizability, Undecidability, co-Recognizability
    • Week 9: Mapping Reductions, P, NP
    • Week 10: P vs. NP, NP-Completeness, The Big Picture
2012-2013 Academic Year
Winter 2012-2013

This fourth offering of CS103 was the largest offering I'd taught so far (with 265 students!). I was extremely happy with how the material worked out this time - the coverage of Turing machines was a lot more focused, and more generally the intuitions seemed to shine through marvelously. We also added a new Regular Expression design tool (thanks, Remington!) that made it possible to test regular expressions in more depth. I am very much looking forward to the next iteration of CS103, when I think that I'll finally have all the bugs ironed out!

  • Total students enrolled: 265
  • Lecture Topics:
    • Week 1: Set Theory, Cantor's Theorem, Proof Techniques
    • Week 2: Induction, Graphs, Relations
    • Week 3: Functions, Cardinality, Diagonalization, the Pigeonhole Principle
    • Week 4: Propositional Logic, First-Order Logic, SAT Solving
    • Week 5: DFAs, NFAs, and Regular Expressions
    • Week 6: The Pumping Lemma, CFGs, PDAs
    • Week 7: Turing Machines, The Church-Turing Thesis
    • Week 8: Unrecognizability, Undecidability, co-Recognizability
    • Week 9: Mapping Reductions, P, NP
    • Week 10: P vs. NP, NP-Completeness, The Big Picture
Fall 2012

This was the third time I taught CS103. This quarter marked the introduction of the Theorem and Definition reference, a set of course notes for the first few weeks, and an automatic grade reporting tool. I'm very happy with how the quarter turned out, and I'm really psyched to build off of it for the upcoming Winter quarter!

  • Total students enrolled: 185
  • Lecture Topics:
    • Week 1: Set Theory, Cantor's Theorem, Proof Techniques
    • Week 2: Induction, Graphs, Equivalence Relations
    • Week 3: Order Relations, Functions, Cardinality, the Pigeonhole Principle
    • Week 4: Propositional Logic, First-Order Logic, SAT Solving
    • Week 5: DFAs and NFAs
    • Week 6: Regular Expressions, The Pumping Lemma, CFGs, PDAs
    • Week 7: Turing Machines, Programming Turing Machines, R and RE Languages
    • Week 8: Unsolvable Problems, Mapping Reductions, co-RE
    • Week 9: P, NP, NP-Completeness
    • Week 10: NP-Completeness, Approximation Algorithms, Cryptography, The Big Picture
2011-2012 Academic Year
Spring 2012

The second offering of CS103 was a lot more polished than the first run. I wrote a set of course notes for the first week of the course, and significantly tightened up the treatment of proof techniques and reductions. I am very happy with how this offering of the class turned out, and I'm very excited to see how well it will go when I teach it this Fall!

  • Total students enrolled: 181
  • Lecture Topics:
    • Week 1: Set Theory, Cantor's Theorem, Direct Proofs, Indirect Proofs
    • Week 2: Induction, Graphs, Relations
    • Week 3: Functions, the Pigeonhole Principle, Propositional Logic
    • Week 4: First-Order Logic, SAT Solving, DFAs
    • Week 5: NFAs, Regular Expressions, the Pumping Lemma
    • Week 6: Context-Free Grammars, Pushdown Automata, Turing Machines
    • Week 7: The Church-Turing Thesis, Universal TMs, Unsolvable Problems
    • Week 8: Reductions, P
    • Week 9: NP, NP-Completeness
    • Week 10: Approximation Algorithms, The Big Picture
Fall 2011

This was a quarter of firsts - my first quarter as a full-time lecturer at Stanford, my first time teaching CS103, and my first time teaching a class with over 200 students. I had a great time teaching, and while the class had a few rough edges (as expected for a first iteration), I'm really psyched to teach it again.

  • Total students enrolled: 213
  • Lecture Topics:
    • Week 1: Set Theory, Cantor's Theorem, Graphs, Relations, Direct Proofs
    • Week 2: Proofs by Contradiction and Contrapositive, Induction
    • Week 3: Double Induction, Strong Induction, Propositional Logic
    • Week 4: First-Order Logic, Functions, the Pigeonhole Principle, Trees
    • Week 5: DFAs and NFAs
    • Week 6: NFAs, Regular Expressions, the Pumping Lemma
    • Week 7: Context-Free Grammars, Pushdown Automata, the Pumping Lemma for CFLs
    • Week 8: Turing Machines, the Church-Turing Thesis, the Halting Problem, Properties of RE Languages
    • Week 9: Reductions, Rice's Theorem, P, NP
    • Week 10: NP-Completeness, Approximation Algorithms