Educational Videos

I have six main video series, listed on this page: Principles of Microeconomics Full Lectures and Short Videos, Data Communications, Econometrics, Introduction to R for Economists, Introduction to Causality, and Advanced Stata Tips. I also have a few stray additional videos, which you can see here, in addition to a few more stragglers which you can see by going directly to my YouTube channel. Feel free to subscribe; who knows when I’ll get the itch to make another series!

This series of videos is meant to accompany my Principles of Microeconomics (ECON 201) class. I also make it available to my Seattle University class but it doesn’t line up perfectly! In any case, you can use it as a free principles of micro course! You can see these videos on a YouTube playlist as well here.

This lecture is equivalent to the first day lecture in ECON 201. Here we cover the basics of incentives, and a bevy of important vocabulary terms.

This lecture is equivalent to the second day lecture in ECON 201. Here we cover comparative advantage, and the tricky problem of how to most efficiently use the resources at hand.

This lecture is equivalent to the first Supply and Demand lecture in ECON 201. Here we cover the basics of the supply and demand model - how it works, how it explains price and quantity, and how it describes markets as working.

This lecture is equivalent to the second Supply and Demand lecture in ECON 201. Here we cover how the supply and demand model MOVES AROUND! No point to the model if you don’t move it around.

This lecture is equivalent to the first post-Midterm 1 lecture in ECON 201. Here we cover the ONE GOLDEN RULE to explain all of economics! Marginal thinking ahead…

This lecture is equivalent to the second post-Midterm 1 lecture in ECON 201. Here we cover the cost structure for a firm, and see how firms in competitive markets choose quantities that maximize their profits.

This lecture is equivalent to the Elasticity lecture in ECON 201. Here we cover elasticity - how strongly quantity supplied and quantity demanded respond when prices change, and how to calculate and interpret it.

This lecture is equivalent to the Efficiency lecture in ECON 201. Here we cover the allocation of goods, and how different ways of allocating goods leads to resources being used more or less efficiently. We also talk a lot about competitive markets, and snow shovels.

This lecture is equivalent to the Market Failure lecture in ECON 201. Here we cover the times when competitive markets will NOT lead to efficient results. We cover externalities, as well as goods that may not be rival in consumption, or may not be excludable.

This lecture is equivalent to the first Pricing Power lecture in ECON 201. Here we cover what happens when competition is limited in some way - pricing power! We cover the causes of pricing power, how a monopoly maximizes profit, and what to do about monopolies.

This lecture is equivalent to the second Pricing Power lecture in ECON 201. Here we cover market structures aside from competitive and monopolistic markets - oligopolies and monopolistic competition. We also look at how firms with pricing power use price discrimination.

This lecture is equivalent to both game theory lectures in ECON 201. Here we cover the basics of game theory, including sequential and simultaneous games, and the problems with commitment that both of them lead us to!

This lecture is equivalent to the Partial Information lecture in ECON 201. Here we cover the times when we don’t know the full consequences of our actions! We cover expected utility, risk, search, and information-providing firms.

This lecture is equivalent to the Asymmetric Information lecture in ECON 201. Here we cover the times when one person in a transaction knows more than someone else. Lemons ahoy!

This lecture is equivalent to the first Labor Economics lecture in ECON 201. Here we cover the basics of what is essentially the biggest market around! What goes into labor supply and labor demand? And why do different people earn different wages?

This lecture is equivalent to the second Labor Economics lecture (Wage Differentials) in ECON 201. Here we cover reasons why different people earn different wages, other than simply being differently productive! We cover compensating differentials, labor unions, and discrimination.

This series of videos is meant to accompany my Principles of Microeconomics (ECON 2110) class at Seattle University. You can see these videos on a YouTube playlist as well here.

This video covers how economists focus on decision-making due to incentives, the concept of costs (including opportunity costs) and benefits, and the calculation of economic surplus.

This video covers the very basics of supply and demand. What do these curves mean, why do we care about them, and where the heck does that equilibrium come from?

This video covers how supply and demand curves shift, and why. What are the determinants of demand and supply curves that change to make them shift, and what happens to the equilibrium as a result?

This video covers the basics behind the concept of elasticity of supply and demand. What does elasticity mean, and what makes a particular good have elastic or inelastic supply and/or demand?

This video covers the golden rule of microeconomics: Marginal Benefit = Marginal Cost. We cover what each of these things mean, why we want to set them equal, and how it relates to overall efficiency in the economy.
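The golden rule can be made concrete with a tiny worked example. A minimal sketch in R (all numbers invented for illustration):

```r
# Toy example (invented numbers): keep consuming while MB >= MC.
mb <- c(10, 8, 6, 4, 2)    # marginal benefit of each successive unit
mc <- c(3, 3, 3, 3, 3)     # constant marginal cost of 3 per unit
optimal_q <- sum(mb >= mc) # take every unit whose MB covers its MC
optimal_q                  # 4 units: the fifth unit's MB (2) falls below MC (3)
```

Stopping at the fourth unit is exactly the "Marginal Benefit = Marginal Cost" logic: each of the first four units adds more benefit than cost, and the fifth would subtract from surplus.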

This video introduces how to solve for optimal production for a firm in a competitive market, providing some easy-to-follow steps for working with this model both algebraically and graphically.

This video explains how firms in competitive markets with free entry and exit take advantage of that entry and exit! Firms will switch industries and markets in the long run, chasing profit but eventually driving it to zero.

This video describes the concept of positive and negative externalities - ways in which people are hurt or helped by markets they have nothing to do with. This can cause market failure because those external costs and benefits aren’t internalized when making purchasing and production decisions. I also cover how to find the efficient and market quantities, and to graph externalities.

This video covers the concepts of goods that are rival in consumption (or not) and excludable (or not), and how that relates to the ability to produce them efficiently in a market. I cover private goods, commons goods, artificially scarce goods, and public goods.

This video covers the concept of pricing power as well as how to find a profit-maximizing quantity and price for a monopoly.

This video covers the concept of price discrimination - charging different people different prices for the same thing based on who they are / what you think their marginal benefit is.

This video covers the concept of a Nash equilibrium and best responses, as well as how to find the Nash equilibrium of a simultaneous game on a game table by looking for best responses.
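The best-response approach can be sketched in a few lines of R. Here is a minimal, hypothetical prisoner's-dilemma game table (payoff numbers invented): a cell is a Nash equilibrium when each player's payoff there is a best response to the other player's strategy.

```r
# Strategies: 1 = Cooperate, 2 = Defect. Invented payoffs.
row_pay <- matrix(c(3, 0,
                    5, 1), nrow = 2, byrow = TRUE)  # row player's payoffs
col_pay <- matrix(c(3, 5,
                    0, 1), nrow = 2, byrow = TRUE)  # column player's payoffs

# A cell (i, j) is a Nash equilibrium if neither player wants to deviate:
is_nash <- function(i, j) {
  row_pay[i, j] == max(row_pay[, j]) &&  # row's best response to column j
  col_pay[i, j] == max(col_pay[i, ])     # column's best response to row i
}
which(outer(1:2, 1:2, Vectorize(is_nash)), arr.ind = TRUE)
# The only TRUE cell is (2, 2): Defect/Defect, the classic dilemma outcome.
```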

This video covers how to find the Nash equilibrium of a sequential (turn-based) game, and how difficulty in committing to actions can break cooperation apart!

This video covers the basics of asymmetric information, how the presence of asymmetric information in a market can lead to it unraveling, and how signaling can partially fix the problem by making it expensive to lie.

This series of videos is meant to accompany my Data Communications class at Seattle University. It covers the concepts behind data visualization and communications as well as skills in Excel, R (ggplot2), and Tableau. You can see these videos on a YouTube playlist as well here.

This is an introduction to a series of videos on data communications, covering what the goal of data viz is and how we can think about our role as data communicators.

This video covers principles of visual perception and how they can be used to reduce clutter and improve focus in a data visualization.

This video, part of a series on data communication, discusses the kinds of attributes you can use to encourage people to focus in on a particular part of your data. I also walk through an example of modifying a graph to improve focus.

This video covers the basics of putting together a graph in Excel. Clearly the worst video in the series, I’m sorry.

This video provides an intro to the grammar of graphics and the associated R package ggplot2. I cover how data, aesthetics, and geometries combine with scales, calculations, and coordinates to produce amazing graphics.

This video goes through how you can manipulate the way that data is processed, scaled, and transformed for presentation on a ggplot2 graph.

This video briefly covers how to work with axis and legend titles, as well as how to do some other neat stuff with the legend (including, uh, getting rid of it).

This video covers how to get multiple geometries on the same set of axes, how to facet your graph, and how to get multiple unrelated graphs all together.

This video covers the use of aesthetic characteristics for decoration of geometries, as well as the use of the theme() function and preset themes.

Before you can visualize the data you need to get it ready! How can we work on importing and preparing data so that it’s in the form we need? This is less about specific tools and more about the concepts and goals of preprocessing.

Notebooks are one of the most common ways that analysis files are shared these days. In R, these generally come in the form of RMarkdown notebooks. How can we create and use them?

The next few videos will go over specific visualization types, talking about what makes them work and where they often go wrong. In this video we’ll be focusing on line graphs.

This continues the series on specific visualizations. This one goes over scatterplots, one of the easiest graph types to get wrong, despite how important and useful they are!

Still a visualization of sorts! And definitely indispensable. How can you make a clear table that tells a story when all you have to work with is a table full of numbers?

Tableau is a piece of premier visualizations software commonly used for making dashboards, workbooks, and visualizations. It has a lot of powerful tools. How can we get in there and use those tools?

Now that we have some visualizations in Tableau, an important aspect is making them look good and work with our principles of visualization. This video will go over how to do that.

We see them everywhere nowadays and for good reason - they’re useful! Data dashboards are a good way of looking at a data set or issue from many different angles. This video will go into the concepts behind dashboards and what our goals will be as we start to make them.

Tableau is tailor-made for dashboards, so making a dashboard in it isn’t all that much harder than just making a visualization. You just take the visualizations and… put them in a dashboard. Still, there are some difficulties to overcome.

What if we want the power and flexibility of R but linked to a dashboard? Flexdashboard to the rescue! Easy to use, it combines what we know about ggplot with what we know about Rmarkdown (and tosses in some nifty htmlwidgets).

Of course, what fun is a dashboard if you can’t interact with it? This video introduces the use of Shiny inside of Flexdashboard. Shiny gives you all sorts of interactivity opportunities, which you can use to give your users much more control over what part of the data they want to look at, or what they want to do with it.

This series of videos is meant to accompany my Applied Econometrics class at Seattle University. It covers the concepts behind inference and identification and the many, many ways we can approach those issues. You can see these videos on a YouTube playlist as well here.

This video introduces the class and talks about some of the things that econometrics tries to do, how it’s different from other fields, and what problems we’ll be facing in the class.

The two main problems that we’ll be running up against as econometricians over and over again are inference error and identification error. What are they, why are they problems, and what can we do?

In this course we’ll be using the R programming language to perform our estimations. This video introduces the concepts of R, how we can get started, and how to work our way around Rstudio.

This video goes through how to “think like R” - how does it work, what are the necessary objects underlying what we’re doing, and what does it actually do? This will help us work with R commands much more easily.

One important part of working with data is, well, working with data. Data often doesn’t contain all the stuff we need - we’ve got to do some cleaning, or variable creation. In this class we’ll be doing that with the dplyr package, which is part of the tidyverse.

Much of the work in econometrics works with regression, which is a method for fitting a line to some data in order to understand the relationships in that data and make predictions. Here we start with the basics of it.

The point of a regression (and much of multivariate statistical analysis) is to use some variables to explain the variation in another variable. How does that work, and what can we do with the part we CAN’T explain?

Identification error creeps on in to ruin our fun. When won’t ordinary least squares give us the causal effect we want, how do we know, and what can we do about it?

We’ll be getting into a command that we’ll come back to over and over again in R - lm(), for linear model. Let’s get started working in R for real by doing regressions in it.
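As a quick taste of what the video covers, here is a minimal lm() call (using R's built-in mtcars data purely for illustration):

```r
# Regress miles-per-gallon on car weight using built-in mtcars data.
model <- lm(mpg ~ wt, data = mtcars)
summary(model)   # coefficients, standard errors, R-squared, and more
coef(model)      # just the intercept and slope
```

The `formula ~` syntax on the left of the comma is the same one we build on later when adding controls, transformations, and interactions.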

This is part of a pair of videos going over the concept of sampling variation, and how it relates to the uncertainty that we have for our estimates. Given that we are uncertain about what our estimates should be, what can we even learn about them?

This is part of a pair of videos going over the concept of sampling variation, and how it relates to the uncertainty that we have for our estimates. Given that we are uncertain about what our estimates should be, what can we even learn about them?

Now we have the idea that we can test whether certain null hypotheses are very unlikely to be true. But beyond concepts we have to actually do the thing! How can we do hypothesis testing in R?

We know all about identification error. So how can we solve it? One way is to use control variables. What are they, what do they do, and how exactly do they fix the problem anyway?

One unfortunate feature of statistics is that you often can’t get much of anywhere without a model of the real world. And those are hard to come up with! But they’re not hard to work with once we have them. Causal diagrams are an easy way of representing the real world that help us figure out what we need to control for.

We’ve done linear regression in R. We’ve done multivariate regression in theory. We’ve talked about control variables. How can we run a multivariate regression in R and perform hypothesis tests on it?

A lot of data in economics, social science, business, you name it, isn’t continuous! It’s discrete. Categorical data, binary data. What can we do with these sorts of variables and how do they work in regression?

Not all lines are straight! How can we work with data that has nonlinear relationships in it? One thing we can do is to introduce transformations of the variables. The two most common are polynomial transformations and logarithmic transformations.

They pop up over and over and over - interaction terms! These terms allow the effect of one variable to differ depending on the value of a different variable. So how do they work and how can we interpret them?

We want to control for stuff, but that’s hard! A lot of the things we want to control for we can’t measure. So what can we do? One thing we can do is control for a lot of stuff at once! Enter within variation and fixed effects…

The most common research design in modern econometrics. This takes the idea of within variation and pushes it one step farther, bringing in a control group so as to isolate the effect of something that went into effect at a given time.

This video is a part of the Experiments module in the class. This particular video goes through some of the downsides of randomized experiments, or at least things to keep in mind when doing them.

This video is a part of the Experiments module in the class. This video goes through the concept of the power analysis, a test we can do before starting our experiment to see what sort of sample size we need.

This video is a part of the Experiments module in the class. One thing we want to look for, across a few dimensions, in an experiment is balance. How can we do balance tests for a bunch of different variables at once? Enter the balance table.

This video is a part of the Regression Discontinuity module in the class. Regression discontinuity is a research design that we can use whenever we have a treatment that is applied based on being above/below a certain cutoff value. How does it work?

This video is a part of the Regression Discontinuity module in the class. Regression discontinuity is great, but surely it has its own problems? Yep! What kinds of problems are those, and what assumptions does it force us to make?

This video is a part of the Instrumental Variables module in the class. Instrumental variables is a research design we can use when our treatment variable has some sources of variation that are endogenous, and other sources of variation that are exogenous. How does it work and what does it do?

This video is a part of the Instrumental Variables module in the class. Instrumental variables is, if you ask me, darn cool. But of course it has its problems! What sorts of assumptions do we need to make to do IV, and what problems do we run into?

This video is a part of the Instrumental Variables module in the class. Now that we’ve learned about what IV is and what roadblocks to avoid, how can we actually estimate the thing? This video goes over how to do so, and how to get standard diagnostics, in R.

This video is a part of the Binary Dependent Variables module in the class. We know that many of the assumptions in OLS are unlikely to be literally true. But we usually ignore it, thinking it’s not a huge issue. But in the common case where the dependent variable is binary, the problems can get too big to ignore! This video goes over why, and what we can do (probit and logit) in its place.

This video is a part of the Binary Dependent Variables module in the class. One downside of probit and logit is that its coefficients are difficult to interpret. This video goes into the concept and practice (mathematically and in R) of turning these results back into probabilistic statements we can easily understand.

This series of videos is aimed at the undergraduate Econ crowd as well as any other economists getting started in R. It’s aimed at explaining the R software package through the lens of the kinds of things that economists tend to do. The videos are divided into Basics, Moderate, and Advanced. You can see these videos on a YouTube playlist as well here. **Note that if you click through to the YouTube video, the code shown in each video is available through a link in the description.** Or, you can simply download a ZIP file with all the code here.

BASIC R VIDEOS

In this video, I cover how to install R and RStudio, and how to work your way around RStudio. Just settling in to the whole thing. Soon we’ll be up and running! This video doesn’t cover rstudio.cloud which can also be handy.

In this video, I cover how to use the documentation in R. I cover the help search bar, the help() function, ??, and searching for R help on the internet.

In this video I cover the basics of working with variables, creating new ones, using the is. and as. functions, and manipulating variables.

This video will cover the basics of using vectors and matrices in R, from creating them with c() and cbind() to exploring them with [] and proper indexing. Not to mention using vectors inside vectors!
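A minimal sketch of the commands named above (values invented for illustration):

```r
v <- c(2, 4, 6, 8)     # create a vector with c()
v[2]                   # indexing with []: the second element, 4
v[v > 4]               # logical indexing: keeps 6 and 8
m <- cbind(v, v * 10)  # bind columns together into a 4-by-2 matrix
m[1, 2]                # row 1, column 2 of the matrix: 20
```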

This video offers an introduction to data frames, showing how to create them out of matrices using as.data.frame (although skipping over how to create them from scratch using data.frame() - check it out!), how to select variables using $, and how to look at the data using head().

This video expands on the introduction to data frames, showing how to manipulate variables using $, and how to choose a subset of the rows and columns of the data using the subset() command.

This video will cover how to install (with install.packages() or the Packages tab) and load (with library() or the Packages tab) packages in R. The particular package we’ll be loading is “foreign”, which contains the read.dta() function we’ll need to load in data for the next few videos!

This video will cover how to calculate basic statistics, like mean and standard deviation, with a single variable, or how to look at all the levels of a categorical or discrete variable with table(). It will also show how to create summary statistics tables with summary() and stargazer().

This video will cover how to calculate summary statistics of two variables, including correlations with cor(), cross-tabulations with table(), comparing the means of two variables with t.test(), and calculating summary statistics by group with aggregate().

This video covers the basics of plotting in R, using the commands plot() for scatterplots, hist() for histograms, plot(density()) for kernel densities, barplot() for bar graphs, and plot(,type=“l”) with aggregate() for line graphs.

MODERATE R VIDEOS

This video will cover how to include something in your regression commands other than just a list of controls! These tips apply to all kinds of regression, too, not just OLS. These include square and threshold terms with I(), logs with log(), interaction terms with * or :, and sets of dummies with factor().
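All of those terms can live in a single formula. A sketch using the built-in mtcars data (chosen purely for illustration):

```r
# One regression using each formula trick from the video:
model <- lm(mpg ~ wt + I(wt^2) +   # square term via I()
              log(hp) +            # log transformation
              wt:hp +              # interaction of wt and hp
              factor(cyl),         # a set of cylinder-count dummies
            data = mtcars)
summary(model)
```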

In this video we cover what to do once you’ve already run your regression! We pull out the predicted values and residuals with predict() and residuals(), we use Breusch-Pagan (bptest()) to check for heteroskedasticity, we calculate heteroskedasticity- or cluster-robust standard errors with coeftest() from the lmtest package paired with vcovHC() from the sandwich package, and we perform F-tests of regression coefficients with linearHypothesis() from the car package (loaded by AER). These commands and tests work with all kinds of regression commands, not just OLS (lm()).
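A minimal sketch of that post-regression workflow, again leaning on mtcars for illustration:

```r
library(lmtest)    # bptest() and coeftest()
library(sandwich)  # vcovHC() robust variance estimators
library(car)       # linearHypothesis()

model <- lm(mpg ~ wt + hp, data = mtcars)
fitted_vals <- predict(model)     # predicted values
resids <- residuals(model)        # residuals

bptest(model)                                      # Breusch-Pagan test
coeftest(model, vcov = vcovHC(model, type = "HC1"))  # robust standard errors
linearHypothesis(model, c("wt = 0", "hp = 0"))     # joint F-test
```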

In this video we cover how to make plots of your regression after you’ve performed it. We produce residual plots, plots of fitted values, and we overlay the regression line on top of a scatterplot.

In this video we cover how to perform an instrumental variables regression using ivreg() from the AER package. We also cover how to estimate the first stage separately, and run an F-test on the instruments using linearHypothesis().

In this video we cover the basics of time series analysis, from making time series objects with ts(), loading them in with data(), plotting them with plot(), plotting autocorrelation and partial autocorrelation functions with acf() and pacf(), performing Dickey-Fuller with adf.test(), and some seasonality with stl() and seasonplot(). Lots of stuff!

In this video we cover how to use R’s ARIMA command arima(), which also covers ARMA models. I also show how to use these models to forecast the future using forecast() from the forecast package. That’s about it!

In this video I cover some basic limited dependent variable models, in particular how to do probit and logit with glm() and then get their marginal effects with the mfx package. I even go over ordered logit and ordered probit with polr() and their marginal effects with ocME().

In this video I cover how to perform a Tobit regression with censReg() and get its marginal effects with margEff(). We also cover how to perform a sample selection regression, or Heckman model, with selection().

In this video, I cover the basics of panel data using library(plm), pdata.frame()s, and performing fixed effects, random effects, and first-difference regressions with plm(), as well as the Hausman test (phtest()).

ADVANCED R VIDEOS

In this video, I cover how to create your own random data (using rnorm() and similar commands) in order to perform a simulation that will allow you to test how much of a problem it is if your assumptions fail! The only other new command here is for loops, but I cover a few advanced ways of putting it all together.
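The skeleton of such a simulation is small. A hedged sketch (sample sizes and the skewed-error setup are invented for illustration):

```r
# Does OLS recover the true slope when errors are skewed rather than normal?
set.seed(123)
n_sims <- 1000
estimates <- numeric(n_sims)
for (i in 1:n_sims) {
  x <- rnorm(100)
  e <- rexp(100) - 1                 # skewed but mean-zero errors
  y <- 2 + 3 * x + e                 # true slope is 3
  estimates[i] <- coef(lm(y ~ x))[2] # store the estimated slope
}
mean(estimates)  # should land close to 3: normality isn't needed for unbiasedness
```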

In this video, I provide an introduction to the tidyverse library, which is a great way of handling data manipulation. I address the better-than-data-frames tibbles and as.tibble(), the better-than-head() glimpse(), and the better-than-read.dta read_dta in the haven package.

In this video I cover how to reshape your data from wide to long format using gather(), and then how to join/merge two data sets together using the various join commands in tidyverse. I also cover the concept of the observation level, very important when joining!

In this video I cover the basics of using dplyr, a package for manipulating data. In this video I cover how dplyr is structured, and how you can use mutate() to create variables, filter() and select() to subset rows and columns, arrange() to sort the data, and rename() to rename variables.
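A minimal sketch of those five verbs in action (mtcars used purely for illustration; the pipe operator comes later, so each step is written out):

```r
library(dplyr)

df <- mutate(mtcars, kpl = mpg * 0.425)  # create a variable (km per liter)
df <- filter(df, cyl == 4)               # keep only four-cylinder cars
df <- select(df, kpl, wt)                # keep only these columns
df <- arrange(df, desc(kpl))             # sort, most efficient first
df <- rename(df, weight = wt)            # rename a variable
head(df)
```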

In this video I will cover how to use the pipe operator %>% in dplyr to chain together multiple operations at once. Highly useful!

In this video I will cover how to use group_by() in dplyr, which will prove highly useful in combination with mutate() and summarize().
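The pipe and group_by() fit together like so (mtcars for illustration):

```r
library(dplyr)

mtcars %>%
  group_by(cyl) %>%                     # one group per cylinder count
  summarize(mean_mpg = mean(mpg),       # collapses to one row per group
            n_cars = n()) %>%
  ungroup()
```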

In this video I go over a very basic introduction to the ggplot function from the ggplot2 package (from, of course, the tidyverse). I illustrate how to put together a ggplot command by combining a data set, an aes()thetic, and a geometry (geom_point() in the example). Check out the ggplot2 reference manual here!
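That three-part combination is all a basic ggplot needs. A minimal sketch with mtcars:

```r
library(ggplot2)

ggplot(mtcars,                 # the data set
       aes(x = wt, y = mpg)) + # the aes()thetic: which variables map where
  geom_point()                 # the geometry: draw them as points
```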

In this video we go further into ggplot, detailing some of the geometries you’re most likely to use in your graphs - geom_point() for scatterplots, geom_bar() for bar graphs, geom_histogram() for histograms, geom_density() and stat_density() for density plots, and geom_line() for line graphs.

In this video we go further into ggplot, detailing how to overlay graphs on top of each other, with examples on putting a regression line (geom_smooth) over a scatterplot (geom_point) or putting straight lines (geom_hline and geom_vline) over scatterplots and density (geom_density) plots. I also go over how to draw graphs separately by group by including a color, shape, or linetype aspect in the aes()thetic.

The final video! In this video I go over how to put labels and titles on your ggplot()s. I go over axis labels with xlab() and ylab() as well as whole-graph titles with ggtitle(). Finally, I cover how to label your legends as desired, either by carefully labeling your factor variables, or with scale_color_manual() and/or labs().

BONUS R VIDEOS

This video is a brief introduction to the vtable package, which I wrote. The function vtable() makes it quick and easy to look at information about your variables while you continue to work on your data, without repeatedly calling head(), glimpse(), etc., or to automatically create variable documentation files to share with others.

This video covers the estimatr package, which is a very useful package for running regressions that works in the ways economists expect them to! It does linear and instrumental-variables regression, with options for fixed effects, clustering, and robust standard errors.

This video is from a workshop held at Seattle University that went into how to use the tidyverse to work with data, and how to think about data cleaning and wrangling and pull it off in the tidyverse.

data.table is the fastest and most efficient way of handling data in R, and for most tasks it outpaces the alternatives in pretty much any other language, including Python or Julia. This video is from a workshop held at Seattle University that went into how to use data.table to work with data, and how to think about data cleaning and wrangling and pull it off in data.table.

These videos offer an introduction to causality, using causal diagrams as a way into understanding the concept of identification, and how we can use modeling with data to make causal claims without having to run an experiment. The later videos also offer a basic overview of common causal inference methods. The videos can also be seen on a YouTube playlist here.

The first video in a series on causality, intended for my ECON 305 class Economics, Causality, and Analytics.

This video introduces the concept of writing our assumptions down in a causal diagram. These are going to come in very handy as we work with all sorts of real-world data generating processes and try to understand how we can get causal inferences out of them!

This video discusses how we can take our own ideas about how the world works and encode them in a causal diagram. You’ll be able to take what you know and put it in a format that you can actually use to identify your questions of interest.

This video discusses how to use the causal diagram-drawing website Dagitty.net. Dagitty will come in handy for a number of homework assignments that require you to draw diagrams. It also comes in handy generally in that it can tell you how to identify your effect of interest!

This video discusses how you can list the paths from X to Y on a causal diagram, determine whether they are front or back door paths, and select control variables to close the back doors and identify your causal effect of interest.

This video discusses what it means to control/adjust for a variable in order to close a back door, and walks through the mechanics of how it can be done.

This video discusses those tricksy collider variables - variables that close back door paths all by themselves, and will bias your results if you control for them!

The eighth video in a series on causality introduces the first tool for our causal inference toolbox: fixed effects, which allows us to control for certain kinds of variables even if we can’t measure them!

The ninth video in a series on causality introduces the concept of identifying a causal effect by comparing a treated group and a comparable untreated group. We then explore one way of doing this: constructing that comparable untreated group ourselves using coarsened exact matching!

This video covers one of the most widely-used causal inference methods: Difference-in-Differences (or Diff-in-Diff, or DID, or DD…).

This video covers what is considered to be one of the most trustworthy observational causal inference research designs: regression discontinuity, aka what happens when you are carried across a threshold.

This series of videos is somehow my most popular by far. It provides Stata tips for the advanced user, and assumes you already have a moderate knowledge of how to use Stata. You can see these videos on a YouTube playlist as well here.

The first video I made for YouTube and by far the worst video quality! This is a nice time-saving trick that I use with some regularity when I have to write some tedious code. Usually this approach is taking the place of a nicer-looking for loop of some kind directly in your stats package, but I find that using Excel like this can often save a lot of time over writing the for loop if the task isn’t nice and clean. Also, in some cases it’s easier to check that you’ve done it correctly, and make the process robust to strangely formatted data like what I have here.

This is a very useful trick that I use all the time whenever I am doing data editing in Stata. Stata-specific, too! Can’t get this to work in most other statistics packages.

Two of the trickiest Stata commands that you will almost certainly find yourself having to use if you’re manipulating panel data!

If there are missing observations in your data it can really get you into trouble if you’re not careful. Some notes on how to handle it.

If you’ve ever tried to use old versions of Stata to open datasets made in newer versions, you know it doesn’t quite work right! Let’s figure out how to do it.

This video will explain how to use Stata’s inline syntax for interaction and polynomial terms, as well as a quick refresher on interpreting interaction terms.

This will go over the basic syntax and information about foreach and forvalues loops in Stata, as well as how to use them with levelsof.

Just some nice quick shortcuts to save you some time in Stata! Nothing too major here, but these tips may save you a little bit of time.

Don’t you dare spend hours copying over every cell of your table by hand! There are many easier ways to get your results out of Stata. Goes over outreg2, mkcorr, and copying tables.

Don’t you dare spend hours copying over every cell of your table by hand! There are many easier ways to get your results out of Stata. Goes over outreg2, mkcorr, and copying tables.

This video will talk about some of the basics of bootstrapping, which is a handy statistical tool, and how to do it in Stata.

This describes the very simple steps for creating some very nice-looking tables for your research papers. It uses Stata to create the tables in the first place, but these steps could just as easily follow from a Stargazer output in R, or something else entirely. I won’t claim this is the only way to do it, but this is quick, easy, and always looks good!

The birth of a thousand internet comments, the Monty Hall Problem does actually have a single correct solution (assuming it was described correctly - if they forgot to mention that Monty knows where the car is it is indeed 50:50!). This video has nine different explanations to provide an intuitive understanding of this strange result.
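If the nine explanations don't convince you, a simulation might. A minimal sketch in R of the standard setup (Monty knows where the car is and always opens a goat door you didn't pick):

```r
# Simulate Monty Hall: switching should win roughly 2/3 of the time.
set.seed(42)
wins_if_switch <- replicate(10000, {
  car  <- sample(1:3, 1)   # where the car actually is
  pick <- sample(1:3, 1)   # contestant's first pick
  goat_doors <- setdiff(1:3, c(car, pick))  # doors Monty may open
  # Beware R's sample() gotcha: sample(x, 1) on a length-one x draws from 1:x,
  # so handle the single-door case explicitly.
  opened <- if (length(goat_doors) == 1) goat_doors else sample(goat_doors, 1)
  switched_to <- setdiff(1:3, c(pick, opened))  # the remaining closed door
  switched_to == car
})
mean(wins_if_switch)  # roughly 2/3
```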

Error messages! They are a constant part of trying to code. But too many students come up against one and have no idea what to do, or how to progress! But error messages are at least trying to help you - this video goes over how to read them and what to do next.

This video is from a workshop held at Seattle University that went into how to use pandas to work with data, and how to think about data cleaning and wrangling and pull it off in pandas.

Not an intuitive process, interpreting those regression coefficients. But there’s a sly logic to them. This is an easily re-watchable video you can return to whenever you want to know how to interpret a regression coefficient.

Interactions are consistently the most squirrely and hard-to-handle parts of the standard undergraduate econometrics toolbox. But they’re really straightforward - you just have to slow down and walk through them carefully. This video will show you how.