Correlated Returns are Insufficient for True Alignment

When people talk about incentive alignment, what they really mean is correlated returns.

Consider equity as a solution to principal-agent problems. Since founders and employees are both paid as a percentage of company valuation, their returns are correlated, and incentives are mostly aligned. [1]

But even where returns are perfectly correlated, there’s no guarantee that actors actually have each other’s best interests in mind. A more concrete way to think about this is: would you be willing to delegate your decisions to the other party? This is what I mean by “true alignment”.

This can go wrong in at least three ways:

  1. Returns are correlated, but with different opportunity cost
  2. Returns are correlated, but with different levels of risk
  3. Returns are correlated, but with different second degree utility

1. Returns are correlated, but with different opportunity cost

Real estate agents are typically paid a commission on a home’s sale price. As a result, their returns are really well correlated with the homeowner’s:

In theory, this ought to mean that real estate agents and homeowners are truly aligned.

In practice, the real estate agent is the one doing work, which means that they’re the one paying opportunity cost.

Let’s say an agent can expect to earn $200/day, and gets paid 6% of the home’s sale price. It’s not worth it for them to spend another 5 days of work pushing your home price from $500k to $510k, when this only nets them $600 against an opportunity cost of $1,000.

In contrast, a homeowner would happily wait a week to get another $10k, or $9.4k after commission. Even accounting for the time value of money, an additional 2% week-over-week is great.

If both sides were explicit about incentives and willing to broker a side deal, the homeowner would happily cover the agent’s full $1,000 opportunity cost: the owner still nets an extra $8,400 after commission, and the agent comes out $600 ahead. But in the absence of transparency, the real estate agent will do their best to convince the owner to sell, even at a net loss to their collective interests.
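The payoffs in this example can be sketched directly (the side payment is the hypothetical deal just described):

```python
# Payoffs for 5 extra days of work pushing the sale price from $500k to $510k.
COMMISSION = 0.06        # agent's cut of the sale price
OPPORTUNITY = 200 * 5    # agent's foregone earnings: $200/day for 5 days
PRICE_DELTA = 10_000     # extra sale price from the extra work

agent_gain = COMMISSION * PRICE_DELTA - OPPORTUNITY  # -$400: not worth it
owner_gain = (1 - COMMISSION) * PRICE_DELTA          # +$9,400: very worth it

# The transparent side deal: the owner covers the agent's opportunity cost.
side_payment = OPPORTUNITY
owner_net = owner_gain - side_payment  # +$8,400
agent_net = agent_gain + side_payment  # +$600: now worth the agent's time

print(agent_gain, owner_gain, owner_net, agent_net)
```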

The result is correlated returns without true alignment. Neither side ought to trust the other.

2. Returns are correlated, but with different levels of risk

Another classic example is Venture Capitalists versus Founders. For any given startup, a founder might own 30%, and the VC 10%, so their returns are really well correlated:

In theory, this ought to mean that VCs and Founders are truly aligned.

In practice, VCs are massively diversified across startups, providing insulation against any one failure. As a result, they’re eager to push for riskier bets, even when it’s against the interests of an individual founder.

3. Returns are correlated, but with different second degree utility

Say you’re going out to dinner with a wealthier friend, and you’ve agreed to split the bill. No matter where you go, you’ll be paying the same amount, and getting the same consumption, so returns are really well correlated:

In theory, this ought to mean that wealthy and less wealthy diners are well aligned. You should be willing to let your friend pick the restaurant, and vice versa.

In practice, each party may pay the same financial cost, but has a different marginal utility for their money. Assuming utility is something like log(wealth), the utility cost of a $100 dinner is far greater for you than for your wealthier friend.
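A sketch under the log-utility assumption, with made-up wealth levels ($50k for you, $5M for your friend):

```python
import math

def utility_cost(wealth, bill):
    """Utility lost by paying `bill`, assuming utility = log(wealth)."""
    return math.log(wealth) - math.log(wealth - bill)

you    = utility_cost(50_000, 100)     # ~0.0020 utils
friend = utility_cost(5_000_000, 100)  # ~0.00002 utils
ratio  = you / friend                  # ~100x: same bill, very different cost

print(you, friend, ratio)
```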

Even assuming you have the same gastronomic tastes, your incentives are not actually aligned. You should not trust your friend to pick a restaurant. [2]

Conclusion

Taking “true alignment” to mean “would be willing to delegate decisions”, and “correlated returns” to mean “correlated first order financial returns”, everything I’ve said is true.

But in all these cases, the trick is just to think about the actual experienced utility rather than first order financial returns.

It’s presumably pretty straightforward to sell a home for a fair price, but much more work to sell for substantially more than what it’s worth. So the opportunity cost increases faster than the price of the home. Taking that into account, we can chart the agent’s returns against a non-linearly growing opportunity cost:

Which makes it clear that returns were never really correlated to begin with.

Similarly, risk-avoidance is a function of diminishing returns from wealth. As a result, founders would prefer a more certain return, while diversified VCs can afford to pay in risk for higher expected returns.

So the conclusion is not that something wacky is going on and alignment is impossible. It’s just that we have to take into account returns to actual utility rather than just naively looking at immediate finances.


Postscript

I framed these as three different examples, but in a sense, they’re all the same.

The Venture Capitalist is risk-indifferent because they’re diversified. The founder is risk-averse because they experience diminishing returns from wealth to utility. Otherwise, a 1% chance of a billion dollar exit would be just as good as a guaranteed $10 million exit. So this is outwardly about risk, but really about second degree utility.
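This postscript claim can be checked numerically. A sketch under the log-utility assumption, with an assumed $100k of baseline founder wealth (both figures made up for illustration):

```python
import math

BASE = 100_000  # assumed baseline wealth of the founder

p, big = 0.01, 1_000_000_000  # risky: 1% chance of a $1B exit
sure = 10_000_000             # certain: guaranteed $10M exit

ev_risky = p * big  # $10M: identical expected value to the sure exit

# Expected log-utility of final wealth:
eu_risky = p * math.log(BASE + big) + (1 - p) * math.log(BASE)
eu_certain = math.log(BASE + sure)

# The log-utility founder strongly prefers certainty; a diversified,
# roughly risk-neutral VC is indifferent between the two.
print(eu_certain > eu_risky)
```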

Arguably, diminishing returns to wealth are also just a function of opportunity cost. The more money you have, the less consumption in one place becomes a substitute for consumption elsewhere. This is analogous to a concave production-possibility frontier: the more you produce of one good, the more the opportunity cost of producing more increases. This gives us a sense of diminishing returns, even without appeal to the hedonic treadmill or other psychological effects.


[1] Note that this only works for early employees at startups. The more your compensation is defined by salary rather than equity, the more your returns are fixed, and the less aligned you’ll be with your founders.

[2] Unless of course, they offer to treat you.

Lambda School's Incredibly Naive Incentive Alignment

Lambda School is the most incentive-aligned education in the world.

It is so incentive-aligned that their fundraising announcement was titled “Lambda School Raises $74 million for Incentive-Aligned Education”. Another blog post, titled “Taking Baby Steps Toward Incentive-Aligned Higher Education and Job Training”, includes the phrases:

  • With incentive alignment, schools don’t succeed unless their students do.
  • We’ll continue to share updates on incentive-aligned education
  • Incentive alignment between schools and students is the objective.
  • As long as the underlying principles of student protection and incentive alignment hold true, that’s what matters.
  • …create an education model where the incentives of the school and student are aligned
  • …keep financial risk low for students and maintain incentive alignment.

Lambda’s big innovation is the Income Share Agreement (ISA), a mechanism that allows students to pay the bootcamp as a percentage of their salary, rather than paying tuition upfront. This ensures that the bootcamp is free to attend until you get a job, and ensures that Lambda is extremely incentive-aligned.


But is it actually?

Here are the details of the Lambda deal:

  1. Students pay 17% of their salary for 2 years,
  2. Payments are capped at $30,000, no matter how much you’re making
  3. Students only pay if they’re making over $50,000

With this information, we can graph Lambda School’s 2-year returns against raw student income:

Those flat lines represent the cap and floor. The section in the middle is where Lambda Returns grow as a function of Student Income.
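Those deal terms pin down Lambda’s 2-year take as a simple piecewise function (a sketch; payment timing within the two years is ignored):

```python
def lambda_returns(salary):
    """Lambda's total 2-year ISA take for a given raw annual salary."""
    if salary < 50_000:
        return 0  # the floor: no payments under $50k
    return min(0.17 * salary * 2, 30_000)  # 17% for 2 years, capped at $30k

# The cap binds at 30_000 / 0.34, i.e. around an $88,235 salary.
print(lambda_returns(49_000), lambda_returns(60_000), lambda_returns(95_000))
```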

But remember that these returns come out of student income. So, accounting for Lambda payments and taxation, here’s the same chart, but now with another line for Actual Student Income.

(Source)

That huge drop at $50,000 is the point where tuition payments kick in, and Actual Student Income drops off.

This is a classic example of a perverse incentive: students would actually take home more with a lower salary. [1] As payments and taxes scale with earnings, students don’t recover the actual income they had at a raw salary of $49,000 until they hit a raw salary of $67,000, nearly $20,000 later.
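Here’s the take-home side as a sketch, modeling only the ISA. Taxes are omitted for simplicity, which is why this simplified version breaks even around $60,000 rather than the $67,000 cited above:

```python
def net_of_isa(salary):
    """Annual income net of ISA payments only (taxes omitted)."""
    if salary < 50_000:
        return salary           # below the floor: no ISA payments
    return salary * (1 - 0.17)  # 17% of salary goes to Lambda

# The cliff at $50,000: one extra dollar of salary costs ~$8,500 of take-home.
print(net_of_isa(49_999), net_of_isa(50_001))

# Even ignoring taxes, take-home doesn't recover its pre-cliff level
# until a raw salary around $60,240; taxes push that past $67,000.
breakeven = 49_999 / (1 - 0.17)
print(round(breakeven))
```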

Accordingly, we can divide the graph into 4 segments, and for each, ask if incentives are actually aligned:

  • 1) Not Aligned: Actual Student Income increases, but Lambda Returns do not.
  • 2) Misaligned: Actual Student Income drops, but Lambda Returns grow sharply.
  • 3) Aligned: Actual Student Income and Lambda Returns grow together.
  • 4) Not Aligned: Actual Student Income increases rapidly, but Lambda Returns are flat.

To summarize, the actual incentive-alignment portion of this graph only happens in section 3 (when raw income is between $67,000 and $88,000). Despite their repeated claims, Lambda School is not actually incentive-aligned.


Okay, fine, but does this actually matter? Maybe all salaries fall in that incentive-aligned range, and so in practice, Lambda is effectively incentive-aligned.

We can test this hypothesis against Lambda’s table of reported student outcomes:

We want to know how many students make between $67,000 and $88,000. Using a naive assumption of uniform distribution within buckets, we get 47 students, or 26% of the total population. [2]
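Spelled out, using the bucket counts from footnote [2] (71 students in the $50-75k bucket, 46 in the $75-100k bucket):

```python
# Bucket counts from Lambda's outcomes table (see footnote [2]):
bucket_50_75   = 71   # students earning $50k - $75k
bucket_75_100  = 46   # students earning $75k - $100k
reported_total = 178  # students with reported salaries

# Uniform interpolation: assume students are spread evenly within each
# $25k-wide bucket, then count the slice falling in $67k - $88k.
aligned = (75 - 67) / 25 * bucket_50_75 + (88 - 75) / 25 * bucket_75_100
print(round(aligned), round(aligned / reported_total * 100))  # 47 students, 26%
```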

To make a closer approximation, we can estimate the underlying distribution. The range we’re curious about ($67k - $88k) doesn’t align with the bucketing, but we can make an educated guess by fitting a normal distribution:

That’s with mean $72,000, SD $26,000. This is still naive, but seems to be a decent approximation, and probably better than the uniform assumption. Using this model, we estimate that 31% of graduates are earning $67,000 - $88,000.
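The same estimate under the fitted normal, using the standard normal CDF:

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

MU, SIGMA = 72_000, 26_000  # fitted to the bucketed outcomes data

# Probability mass in the incentive-aligned range:
p = norm_cdf(88_000, MU, SIGMA) - norm_cdf(67_000, MU, SIGMA)
print(round(p * 100))  # 31, i.e. ~31% of reported salaries
```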

To sum up, Lambda School appears to be incentive-aligned with around one third of their students.


While Lambda School is not properly incentivized to improve income for the majority of its students, you might still argue that it is at least incentivized to get them a job.

This is fair, but it’s also true of every other educational institution. Flatiron School releases data on job placement outcomes, as do General Assembly and every university. It doesn’t matter whether you literally get paid as a portion of student income; every institution has a reputation to uphold, and a case to make for its value.

It is true that Lambda School only charges tuition if you get a job earning more than $50,000, and that’s great. For many students however, I expect this calculation to be dominated by the question of whether or not they’ll actually have a job at the end, a matter on which Lambda School has been notoriously deceptive.


[1] This concept of being punished for earning money is at the heart of the libertarian critique of taxes. It’s absolutely bizarre that Lambda has appropriated libertarian rhetoric without any of the accompanying economics.

[2] (75 - 67) / 25 * 71 + (88 - 75) / 25 * 46

Austen Allred is Consistently Deceptive

Lambda School is a coding bootcamp that only charges tuition if you get a job. Founder/CEO Austen Allred frequently takes to Twitter, defending his bootcamp against allegations of fraud, and rebutting critics with case after case of student success.

I was initially excited about Lambda School, but have slowly grown disillusioned over time. So when they released their 2019 H1 Outcomes Report, I was excited to finally access a ground truth and put an end to all the speculation.

Instead, I found a consistent pattern of deception.

In the rest of this piece, I’ll walk through a number of examples, in which Allred:

  • Claims a job placement rate of 86%, when the actual number is as low as 55% and at best 70%
  • Misrepresents graduate salaries on Twitter, despite claiming a random sample
  • Calls regulatory approval a “significant endorsement”, despite a troubled history of bans
  • Lies about having been homeless

Some of these are blatant and explicit, some are more subtle. I’ve done my best to present the facts fairly, and leave the rest up to your judgement.

1. Allred Misrepresents Student Outcomes

In early 2019, Allred took to Twitter to share student outcomes, listing the following salaries:

  1. J: $80,000+
  2. S: $80k
  3. D: $110k base, ($130k total)
  4. C: $130k
  5. T: $90k
  6. L: $140k
  7. J: $85k
  8. L: $89k
  9. A: $85k
  10. R: $75k
  11. S: $70k

A year later, when the Outcomes Report was released, it cited a median salary of just $70,000. In stark contrast, Allred’s 11 Tweets report exactly 0 students making less than $70,000. Statistically, that’s like flipping a coin and getting 11 heads in a row (probability .00049).
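The arithmetic behind that coin-flip figure, treating the $70,000 median as exact and the tweets as independent draws:

```python
# Probability that all 11 independently sampled salaries land at or
# above the median (each does so with probability 1/2, ignoring ties):
p = 0.5 ** 11
print(p)  # 0.00048828125
```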

This seems bad, but whatever, Allred is just highlighting a few heartwarming anecdotes right? Surely he doesn’t actually claim that this is a randomly selected sample?

Except he absolutely does:

To see just how poorly Allred’s claims line up against the official Outcomes Report, we can plot the histograms side by side:

Allred claims that his examples were randomly selected, but his own statements are contradicted by the official outcomes report.

2. Allred Repeatedly Misrepresents Job Placement Rates

Earlier this year, New York Magazine reported that Lambda School had marketed itself as having an 86% job placement rate, but then released an investor memo reporting a much lower rate of just 50%.

Allred took to podcasts, explaining in an interview with Jason Calacanis:

When we talk to investors, we talk in terms of enrolled students… of those students who were enrolled, X% were hired. Whereas when schools speak publicly, they speak in terms of graduated students. So obviously those two numbers are always going to be different.

He goes on to clarify:

That 86% was somewhat out of date… so we quickly put together what the real numbers would have been, which was 78%.

Okay, so that seems reasonable enough. Case closed?

But then the official Outcomes Report was released, and contradicted Allred’s own statements:

  • Of the 448 students in the cohort, only 318 graduated
  • Of those 318, only 284 graduated on time
  • Of 284 graduates, Lambda could only reach 255
  • Of those 255, only 201 had jobs
  • Of those 201, Lambda “did not have salary data… due to our data collection methods”, giving us data for 178 students

The job placement rate for enrolled students is 201 out of 448, or 45%. The job placement rate for graduated students is 201/318, or 63%.

It actually gets worse. The original Lambda School claim was:

86% of Lambda School graduates are hired within 6 months and make over $50k a year

But according to the outcomes report, 26 of the placed graduates are making under $50,000. The actual placement rate for graduates making over $50,000 is just 55%. That’s a far cry from the 86% they claim, and much closer to the 50% reported by New York Magazine.
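The placement rates above reduce to simple ratios over the funnel:

```python
# Funnel numbers from the 2019 H1 Outcomes Report:
enrolled, graduated, placed = 448, 318, 201
placed_over_50k = placed - 26  # 26 placed graduates earn under $50k

rate_enrolled       = round(placed / enrolled * 100)           # 45%
rate_graduates      = round(placed / graduated * 100)          # 63%
rate_grads_over_50k = round(placed_over_50k / graduated * 100) # 55%

print(rate_enrolled, rate_graduates, rate_grads_over_50k)
```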

To be clear about timing, the New York Magazine article was published February 19th 2020, the interview was on March 3rd, and the Outcomes Report was published March 28th 2020. So these should all be about the same cohort, or at least a similar set of students.

So where does the 78% come from? Following the article, Allred published a note on LinkedIn with the following chart:

So 78% is what you get if you ignore students who graduated late, ignore students who have yet to graduate, ignore the “graduated and disengaged” students, and pretend everyone is making over $50,000.

To his credit, Allred does provide another table which includes disengaged students and reports a 70% placement rate. Not only is this a far cry from the 86% originally reported, it also includes the 15% of placed students making under $50,000.

Although it may seem fair to count late graduates in the H2 report, note that this is an unrepresentative sample. Students who graduate late will likely have worse results, and underperform students who graduated on time. If Lambda wanted to account for this, they could have included 2018 H2 late graduates in the 2019 H1 report, but they neglected to do so.

3. Allred Misrepresents the Regulatory Environment

In March 2019, Allred wrote:

Regulators love us - they’re sick of for profit schools promising a lot and delivering little except debt. [emphasis mine]

December the same year, it was revealed that Lambda was actually operating illegally, with no approval from regulators.

Okay, so this looks bad, but it doesn’t mean Allred is lying. It’s entirely possible that he thought regulators loved Lambda in March, and wasn’t corrected until a $75,000 fine hit the following month.

When Lambda finally did get approval to operate in California, Allred wrote:

Over the past year, Lambda School worked to advance an ambitious goal: become the first online school approved by California regulators to offer ISAs. We achieved a major victory when state regulators licensed us as a school in August, representing a significant endorsement for our all-online, career-focused education model. However, in order to secure this approval, we made the difficult decision to stop offering our ISA option to students in California.

To be clear, none of this is an outright lie, but it is a dishonest representation of the regulatory environment. To recap, Lambda applied for approval, leading to this saga in which:

  • July 2019: Regulators deny the application, ordering Lambda to cease operations
  • Lambda continues operations anyway
  • December 2019: Lambda re-applies for approval, is told to cease operations
  • Lambda continues operations anyway
  • June 2020: Lambda submits another application, regulators say they cannot use ISAs
  • Lambda finally agrees to stop using ISAs in California
  • October 2020: Allred calls regulator approval a “major victory” and “significant endorsement”

As best I can surmise, those are the facts of this case. You can decide for yourself if you think Allred is acting with integrity.

Bonus: Allred Lies about Experiencing Homelessness

This has nothing to do with Lambda School, and is admittedly somewhat gratuitous. But it does point to continued dishonesty on the part of Allred, and is too blatant to ignore.

In 2017, Allred tweeted:

I was homeless, had no skills, no degree, just $300 and a laptop, but that’s all you need.

In a 2018 Hacker News comment, he wrote:

I’m also formerly homeless… I know how hard it is to focus on getting a job when you’re just trying to survive

An article includes a screenshot of Allred’s reply to a tweet, which reads:

I have been homeless, slept on those same streets [1]

The arc from homeless to multi-millionaire founder would be inspiring, except that it isn’t actually true.

In a 2013 Hacker News comment, Allred writes:

I lived in a Honda Civic this summer as I was getting a startup off the ground… and I had a half dozen people offer to let me stay at their place or crash on their couch rent-free.

…As a result, I wasn’t living in a car for lack of other options, but rather out of belief that I could create something by sheer will-power, and that I was going to do that come hell or high water. My homelessness was a matter of seeking something greater than myself, not being lost to poverty. [emphasis mine]

Similarly, in a now-deleted blog post titled Voluntary Homeless in Silicon Valley, Allred writes:

Candidly, living in a car in Silicon Valley had much more appeal to me than a single bed and a shared bathroom… Much more than money, what fuels me is obsession with minimalism, reading way too much Thoreau, and trying to continually see life from a different angles…  It’s about questioning society at its fundamentals, and seeing what a life not tied down by a foundation really feels like. And so far I love it.

So again, Allred’s claims are… not technically a lie? I mean he did sleep out of a car, but he did it voluntarily. You’re free to make your own judgements, but when I hear “I have been homeless, slept on those same streets”, I do not think of someone with half a dozen offers of free housing. That isn’t poverty, it’s cosplay.

Conclusion: Why does any of this matter?

Am I just piling onto an already heavily criticized company? Am I guilty of being a critic rather than a creator?

I initially wrote about this purely using Lambda School as an example to illustrate a broader point about incentive alignment. But the more I read, the clearer it became that they were just consistently dishonest. Not outright fraudulent, but just really misleading about everything from student outcomes, to regulatory pressure, to the founder’s own history.

Still, why does any of this matter if Lambda is genuinely educating and helping students?

First of all, despite the outcomes report, we still have no idea what the median outcome actually is! They’re reporting data for 178 out of 448 enrolled students, which is just 40%. According to the report, the other 60% of enrolled students either:

  • Remain unemployed (54 students, 12% of cohort)
  • Graduated late or not at all (164 students, 37% of cohort)
  • Became mysteriously unreachable, or went mysteriously uncounted (52 students, 12% of cohort)

And none of them are included in the salary calculation. So what’s the actual median outcome? We have no idea.
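The report’s own funnel reconciles exactly: those three categories, plus the 178 reported salaries, account for the full cohort:

```python
# Reconciling the report's funnel: every enrolled student is either
# in the salary data or in one of the three missing categories.
enrolled, reported = 448, 178

unemployed   = 255 - 201                  # 54: reached, but no job
late_or_none = 448 - 284                  # 164: graduated late or not at all
uncounted    = (284 - 255) + (201 - 178)  # 52: unreachable + missing salary

missing = unemployed + late_or_none + uncounted
print(missing, missing + reported == enrolled)  # 270 True
```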

Second, if their outcomes are actually good, why do they have to constantly lie? As Vitalik once said:

If you have a good way of proving something and a noisy way of proving something, and you choose the noisy way, that means chances are it’s because you couldn’t do the good way in the first place.

If Lambda actually cares about transparency, they should just report:

  • How many students have ever enrolled
  • How many students have been placed in jobs earning over $50,000
  • Average and median time to placement

And then I’ll happily shut up and accept that exaggerated marketing is sometimes required to make good things happen. Until then, pointing out that their transparency report is not actually transparent and their anecdotes not actually representative is fair game.


[1] Allred’s tweet is deleted, but the original is still up. There is another tweet replying to Allred, suggesting that his reply did exist before being deleted.


Coda:

It’s me, I’m the asshole.


EDIT 11/19/2020:

A few more things to mention:

  • Lambda Fellows launched today, a seemingly illegal unpaid internship. Because nothing screams “valuable skills” like giving away labor for free.
  • Lambda’s Outcomes Report is described as “biannual”, but over 6 months since publication, there has been no news of a subsequent report for the 2019 H2 cohort.
  • One more example of questionable honesty: Allred claims on Hacker News that he canceled a cohort’s ISAs because it was “the right thing to do”, but a student from the cohort claims it took “months of work” and an agreement not to sue.