Tuesday, March 21, 2017

Leaks

I got to spend a year (July 2015-June 2016) working as a Senior Advisor at the Department of Housing and Urban Development.  I wasn't particularly high up in the pecking order, but I got to work with a number of people from HUD, Treasury and the White House who were.

Here's the thing (sorry for using a Sorkinism): all of these people--every single one--liked President Obama and were proud to work for him.

Did they think he was perfect, or that he always made the correct decision?  Of course not.  But I have to say, in all the meetings in which I got to participate, there was reasoned deliberation, and comportment really mattered.  The atmosphere was professional and respectful.  And I think because of this, no one wanted to embarrass the president--certainly no one wanted to go out of his or her way to damage the president.

Just saying...

Sunday, February 19, 2017

Troika

Troika is the forecasting process for the federal government; it is called Troika because it is a joint project of Treasury, the Office of Management and Budget, and the Council of Economic Advisers.

Last year, while I was a Senior Advisor at HUD, I got a peek into Troika; I was invited to participate in a meeting to offer a perspective on the US housing market.  I am not going to say much about the details of the meeting, except that the Troika process is very much based on econometric modeling, that the modelers are really good at their jobs, and that the debates about the models are exactly the sorts of debates one would wish government officials to have.  To give one example, at the meeting I attended, there was a debate about a parameter estimate.

The debate arose from the following conditions: suppose economic theory implies that a parameter b = b*.  The estimated parameter is b* + a, and its standard error is 2a.  The debate was whether the forecast should be based on b* or on b* + a.  Needless to say, one could make reasonable arguments either way (showing that no matter how good a modeler is, she needs to rely on judgment at some point).
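To see why the data alone can't settle the question, consider the arithmetic (the notation here is mine, not the Troika modelers'): a test of the theoretical value compares the gap between the estimate and the theory to the standard error,

t = ((b* + a) - b*) / (2a) = 0.5

which is far below any conventional critical value.  The data cannot reject the theory, but they also cannot confirm it, so the choice between b* and b* + a is necessarily a judgment call.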

This is how government forecasting has been done--empirically, rigorously, and without an agenda. It saddens me to think that this is under attack.





Monday, January 23, 2017

Why I like what Quicken is doing

There is a piece in the New York Times from yesterday that sort of implies that Quicken Loans' rapid rate of growth (it is now the second largest FHA lender, after Wells Fargo) must mean the lender is up to no good.  But unlike Countrywide and WaMu, whose growth in the previous decade was the result of unsound lending practices, Quicken has developed a business model that, in my view, can result in sounder lending than traditional underwriting, expanded access to credit, and less loan-applicant frustration.

I suppose I should say here that I have no financial interest in Quicken (it is closely held, so I couldn't have one even if I wanted to).  I met its CEO, Bill Emerson, once, and spoke to him on the phone once, and we had nice conversations, but I would hardly say we know each other socially (for all I know he wouldn't even remember talking with me).  I have also had cordial conversations with other Quicken executives, which I think gave me a little insight into how the company operates.

Yesterday's piece notes that Quicken is viewed more as a technology company than a mortgage company, but it doesn't expand on what that means.  Here is what I think it means--it uses technology to improve quality control and compliance, and to do its own underwriting.  Specifically, when a potential borrower applies for a loan using the Rocket Mortgage app, she gives permission to Quicken to download financial information from the IRS, bank accounts, and other accounts. Because the information flows directly from the source, loan applications are complete and accurate, and hence comply with an important requirement for FHA loans.

The information is then run through the FHA TOTAL scorecard, where it receives an accept or a refer (a refer means the loan can only be approved through manual underwriting, and in practice is often rejected), and through Quicken's own underwriting algorithm.  The executives I spoke with at Quicken told me that the algorithm is updated frequently.  My guess--I don't know this for a fact--is that the algorithm's foundation is the sort of regression that I discussed in a previous post.
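To make the two-stage structure concrete, here is a minimal sketch of how a decision process like this might be wired together.  Everything in it is hypothetical--the function names, the thresholds, and the coefficients are mine--and it is not Quicken's or FHA's actual logic; it just illustrates running an application through a scorecard-style accept/refer step and then through a statistically estimated default model.

from dataclasses import dataclass

@dataclass
class Application:
    fico: int
    cltv: float        # combined loan-to-value, in percent
    dti: float         # debt-to-income, in percent
    cash_out_refi: bool

def scorecard(app):
    """Toy stand-in for an accept/refer scorecard; the thresholds are made up."""
    if app.fico >= 620 and app.cltv <= 95:
        return "accept"
    return "refer"     # refer -> manual underwrite, often a rejection in practice

def default_probability(app):
    """Toy linear probability model; the coefficients are illustrative only."""
    p = 0.20
    p -= 0.13 * (app.fico >= 680)
    p -= 0.06 * (app.fico >= 740)
    p += 0.04 * (app.cltv >= 90)
    p += 0.0014 * app.dti
    p += 0.05 * app.cash_out_refi
    return min(max(p, 0.0), 1.0)

def decide(app, max_default_prob=0.15):
    """Approve only if the scorecard accepts AND the modeled risk is low enough."""
    return scorecard(app) == "accept" and default_probability(app) <= max_default_prob

print(decide(Application(fico=760, cltv=80, dti=38, cash_out_refi=False)))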

As noted in that post, statistically based algorithms can improve both access to credit and the performance of loans.  As the pool of potential borrowers becomes less and less like previous borrowers (in terms of source of income, credit behavior, family participation in loan repayment, etc.), using data to continuously improve and refine underwriting will be important for sustaining the mortgage market.  To the extent that Quicken is doing this, it makes the mortgage market better.

This is not to say it would be good for Quicken ultimately to dominate the market (such dominance is never healthy).  It would be nice to see fast followers of Quicken enter the market.  But I suspect the reason the company has grown so rapidly is that it has built a better mousetrap.


Monday, January 09, 2017

Danny Ben-Shahar leads me to reflect on whether data should be treated as a public good

Danny Ben-Shahar gave a really nice paper (co-authored with Roni Golan) at the ASSA meetings yesterday about a natural experiment in the impact of information provision on price dispersion.  I want to talk about it, but first a little background.

Price dispersion is an ingredient in understanding whether markets are efficient.  When prices for the same good vary (for reasons other than, say, transport costs or convenience), it means that consumers lack the information necessary to make optimal decisions, and the economy suffers from deadweight loss as a result.

Houses have lots of measured price dispersion, even after controlling for physical characteristics. Think about a regression for a housing market, where

HP = XB + h + e

where HP is a vector of house prices, X is a matrix of house characteristics, and B is a vector of coefficients.  The residual h + e has two components: unmeasured house characteristics, h, and an error term, e, which reflects "mistakes" consumers of houses make, perhaps because of an absence of information.  The h might reflect something like the quality of a view, or the absence of noise, etc.

When we run this regression, we can compute a variance of the regression residuals.  Because we can only observe h+e, we cannot know if this variance is the result of unobserved house characteristics, or of consumer errors.  But if h remains fixed, and there is an information shock that reduces consumer errors,  e will get smaller, and so will the regression variance.
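A small simulation makes the point.  This is my own illustrative sketch, not the paper's estimation strategy: I hold the unobserved-quality component h fixed, shrink the consumer-error component e after an "information shock," and watch the regression's residual variance fall.

import numpy as np

rng = np.random.default_rng(0)
n = 20000

X = rng.normal(size=(n, 3))             # observed house characteristics
beta = np.array([0.5, 0.3, 0.2])
h = rng.normal(scale=0.20, size=n)      # unobserved characteristics (view, noise, ...)

def residual_variance(error_sd):
    e = rng.normal(scale=error_sd, size=n)          # consumer "mistakes"
    hp = X @ beta + h + e                           # house prices (in logs)
    Z = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(Z, hp, rcond=None)   # hedonic regression
    resid = hp - Z @ coef
    return resid.var()

print("before information shock:", round(residual_variance(0.20), 4))
print("after information shock: ", round(residual_variance(0.10), 4))
# h is unchanged in both runs, so the entire drop in residual variance
# comes from the smaller error term e.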

Here is where Danny's paper comes in.  In April 2010, authorities in Israel began publishing online information about house transactions, and in October 2010, they launched a "user-friendly web site."  (Details may be found in the paper.)  The paper measures the change in measured price dispersion before and after the information was publicly available, and, at a minimum, finds reductions in dispersion of about 17 percent.  The paper takes pains to make sure the result isn't a function of some shock that happened simultaneously with the release of the information.  For example, the authors show that price dispersion fell less in neighborhoods with well-educated people.  This could reflect either (1) that well-educated people were better informed about housing markets to begin with, and so got less benefit from the new information, or (2) that a greater share of the residuals in well-educated neighborhoods comes from non-measured house characteristics.[i]  In either event, the result is consistent with the idea that the information shock is what contributed to the decline in measured price dispersion.

So more information really does seem to produce a more efficient housing market.  The policy implication may be that data, in general, should be a public good.  Data meet half of Musgrave’s definition of a public good—they are non-rival (one person’s use of a data-set does not detract from another person’s use).  And while data are excludable (services such as CoreLogic show this to be true), their creation produces a classical fixed-cost marginal-cost problem.  The fixed cost of producing a good dataset is very large; once it is created, the marginal cost of providing the data to users is very low.  This suggests that the efficient price of data should be very low. 

Currently, data services have something like natural monopolies, with long, downward-sloping average cost curves.  Theory says that this means they set prices such that marginal revenue equals marginal cost, instead of setting price equal to marginal cost.  All this implies that data are underprovided.  Danny and Roni's work shows that this under-provision has meaningful consequences for the broader economy.
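In shorthand (mine, not the authors'), the cost structure looks like this:

C(q) = F + cq, with the fixed cost F large and the marginal cost c close to zero

Efficient pricing sets p = c, which is roughly free access; a monopolist instead sets output where marginal revenue equals c, so the price ends up well above marginal cost and fewer users get the data than would be efficient.  That gap is the under-provision described above.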







[i] BTW, this second interpretation is mine (I don’t want the authors on the hook for it if they disagree).

Wednesday, December 07, 2016

The Trouble with DTI as an Underwriting Variable--and as an Overlay

Access to mortgage credit continues to be a problem.  Laurie Goodman at the Urban Institute shows that, under normal circumstances (say those of the pre-2002 period), we would expect to see 1 million more mortgage originations per year in the market than we are seeing. I suspect an important reason for this is the primacy of Debt-to-Income (DTI) as an underwriting variable.

There are two issues here.  First, while DTI is a predictor of mortgage default, it is a fairly weak predictor.  The reason is that it tends to be measured badly, for a variety of reasons.  For instance, suppose someone applying for a loan has salary income and non-salary income.  If the salary income is sufficient to obtain a mortgage, both the borrower and the lender have incentives not to report the non-salary income, which is more difficult to document.  The borrower's income will thus be understated, the DTI will be overstated, and the variable's measurement contaminated.  There are a number of other examples like this.
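To put numbers on it (the numbers are mine and purely hypothetical): take a borrower with $8,000 a month of salary income, $2,000 a month of harder-to-document non-salary income, and $3,600 a month of debt payments.  If the non-salary income goes unreported:

reported DTI = 3,600 / 8,000 = 45 percent
true DTI = 3,600 / (8,000 + 2,000) = 36 percent

The recorded DTI overstates the borrower's true debt burden by nine percentage points, which is exactly the sort of measurement error that weakens DTI as a predictor.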

Let's get more specific.  Below are results from a linear default probability regression model based on the performance of all fixed rate mortgages purchased by Freddie Mac in the first quarter of 2004.  This is a good year to pick, because it is rich in high-DTI loans, and because its loans went through a (ahem) difficult period.  The coefficients predict the probability of not defaulting.

VARIABLE          COEF        SE          T-STAT
FICO >= 620       .1324914    .0039244     33.76
FICO >= 680       .1259424    .0021756     57.89
FICO >= 740       .0600775    .0020249     29.67
FICO >= 790      -.0030439    .0036585     -0.83
CLTV >= 60       -.0336153    .0025297    -13.29
CLTV >= 80       -.0375928    .0021508    -17.48
CLTV >= 90       -.0155193    .0029713     -5.22
CLTV >= 95       -.0261145    .0035061     -7.45
DTI              -.0013991    .000069     -20.26
Broker           -.0439482    .0308106     -1.43
Corresp.         -.0128272    .0277559     -0.46
Other            -.0295511    .0277441     -1.07
Cash-out         -.0520243    .0023775    -21.88
Refi no cash     -.0364152    .0021331    -17.07

The definition of default is ever-90-days late.  I tried adding a quadratic term for DTI, but it was not different from zero.  The estimation sample has 166,585 randomly chosen observations; I held out the other 114,583 observations so I could do out-of-sample prediction (which comes later).  The default rate for the estimation sample is 14.34 percent; for the holdout sample it is 14.31 percent, so Stata's random number generator did its job properly.  For those who care, the R^2 is .12.
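For anyone who wants to replicate something like this, here is a minimal sketch of the estimation step.  I did the work in Stata; this Python version is mine, and the column names and category codes are placeholders for however you prepare the Freddie Mac file, not the dataset's actual field names.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical pre-merged origination/performance file; column names are placeholders.
df = pd.read_csv("freddie_2004q1_merged.csv")

df["no_default"] = 1 - df["ever_90_days_late"]   # outcome: never 90+ days late

X = pd.DataFrame({
    "fico_ge_620": (df["fico"] >= 620).astype(int),
    "fico_ge_680": (df["fico"] >= 680).astype(int),
    "fico_ge_740": (df["fico"] >= 740).astype(int),
    "fico_ge_790": (df["fico"] >= 790).astype(int),
    "cltv_ge_60":  (df["cltv"] >= 60).astype(int),
    "cltv_ge_80":  (df["cltv"] >= 80).astype(int),
    "cltv_ge_90":  (df["cltv"] >= 90).astype(int),
    "cltv_ge_95":  (df["cltv"] >= 95).astype(int),
    "dti":         df["dti"],
    "broker":      (df["channel"] == "broker").astype(int),
    "corresp":     (df["channel"] == "correspondent").astype(int),
    "other":       (df["channel"] == "other").astype(int),
    "cash_out":    (df["purpose"] == "cash_out_refi").astype(int),
    "refi_nocash": (df["purpose"] == "no_cash_refi").astype(int),
})

# Random estimation/holdout split, roughly matching the counts in the text.
rng = np.random.default_rng(42)
est = rng.random(len(df)) < 166585 / (166585 + 114583)

# Linear probability model of NOT defaulting.
model = sm.OLS(df.loc[est, "no_default"], sm.add_constant(X[est])).fit()
print(model.summary())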

Note that while DTI is significant, it is not particularly important as a predictor of default.  To place this in context, a cash-out refinance is 5.2 percentage points more likely to default than a purchase money loan, while a 10 percentage point increase in DTI produces only about a 1.4 percentage point increase in the probability of default.  One can look at the other coefficients to see the point more broadly.

But while this is an issue, it is not a big issue.  It is certainly reasonable to include DTI within the confines of a scoring model based on its contribution to a regression.  The problem arises when we look at overlays.

The Consumer Financial Protection Bureau has deemed mortgages with DTIs above 43 percent to not be "qualified."  This means lenders making these loans do not have a safe harbor for proving that the loans meet an ability-to-repay standard.  Fannie and Freddie are for now exempt from this rule, but they have generally not been willing to acquire loans with DTIs in excess of 45 percent.  This basically means that no matter the loan applicant's score arising from a regression model predicting default, if her DTI is above 45 percent, she will not get a loan.

This is not only analytically incoherent; it also means that high-quality borrowers are failing to get loans, and that the mix of loans being originated is of worse quality than it otherwise would be.  That's because a well-specified regression does a better job of sorting borrowers by their likelihood of default than a heuristic such as a DTI limit.

To make the point, I ran the following comparison using my holdout sample: the default rate observed if we use the DTI cut-off rule versus a rule that ranks borrowers by predicted default likelihood.  If we used the DTI rule, we would have made loans to 91,185 borrowers within the holdout sample, and observed a default rate of 14.0 percent.  If we use the regression-based rule, and make loans to slightly more borrowers (91,194--I am having trouble nailing the 91,185 number exactly), we get an observed default rate of 10.0 percent.  One could obviously loosen up on the regression rule, give more borrowers access to credit, and still have better loan performance.
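Here is a sketch of that comparison, continuing the Python version of the estimation above (it reuses df, X, est, and model from that sketch, and like it, this is my illustrative translation of what I actually did in Stata).

# Holdout comparison: a DTI-cutoff rule versus a rule that ranks borrowers
# by predicted risk and approves the same number of loans.
hold = ~est
X_hold = sm.add_constant(X[hold])
y_hold = df.loc[hold, "no_default"]

# Rule 1: approve everyone at or below the DTI cutoff.
dti_ok = df.loc[hold, "dti"] <= 45
n_dti = int(dti_ok.sum())
dti_default_rate = 1 - y_hold[dti_ok].mean()

# Rule 2: rank by predicted probability of NOT defaulting and approve
# (roughly) the same number of borrowers as the DTI rule did.
pred = pd.Series(np.asarray(model.predict(X_hold)), index=X_hold.index)
approve = pred.rank(ascending=False, method="first") <= n_dti
model_default_rate = 1 - y_hold[approve].mean()

print(f"DTI-cutoff rule: {n_dti} loans approved, default rate {dti_default_rate:.3f}")
print(f"Regression rule: {int(approve.sum())} loans approved, default rate {model_default_rate:.3f}")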

Let's do one more exercise, and impose the DTI rule on top of the regression rule I used above.  The number of borrowers getting loans drops to 73,133 (a decline of about 20 percent), while the default rate drops by only .7 percentage points relative to the model alone.  That means an awful lot of borrowers are rejected in exchange for a modest improvement in default.  If one instead used the model alone to reduce the number of approved loans by 20 percent, one would improve default performance by 1.4 percentage points relative to the 10 percent baseline.  In short, whether the goal is access to credit, or loan performance (or, ideally, both), regression-based underwriting just works far better than DTI overlays.

(I am happy to send code and results to anyone interested).

Update: if you want output files, please write directly to me at richarkg@usc.edu.  To obtain the dataset, which is freely available, you need to register with Freddie Mac at the link referenced above.


  