Tuesday, June 10, 2014

25 Websites, Blogs, Books, & Courses I'd Like to See

  1. CouldBe
  2. MaybeMaybeNot
  3. ConfirmingMyBias
  4. A Sample of One
  5. Unbiased But With a Variance of Infinity (HT Jeff Hammer)
  6. Optimizing Around the Wrong Mean
  7. Embroidering on the Head of a Pin
  8. Much Ado About the Wrong Thing
  9. What Is The Role of Intelligence?
  10. Things We Used to Think (and Might Again in the Future)
  11. What Do We Know, and How Do We Know It?
  12. How Should I Know?
  13. Maybe I am Wrong
  14. Fifty-one Percent
  15. Until Proven Wrong
  16. MostLikely.com
  17. OnTheMargin.com
  18. Not The View of the Author
  19. Stealing Home
  20. Causation, not Correlation ;)
  21. Drinking My Own Kool-Aid
  22. Eating My Own Dog Food
  23. Less Quantity, More Quality
  24. Mean Time to Moron
  25. Push to Failure

Monday, June 02, 2014

100 Days of Gratitude, Day 36 - Barbara Gee

The Awesome Barbara Gee
There are many early but unheralded heroes of GlobalGiving, such as Randy Komisar and Debra Dunn.  But none played a more important role than Barbara Gee, who sat Mari and me down at her kitchen table in Menlo Park in late 2000 and said "So you don't have a business plan? That's ok; let's write one right now."

Barb had been recommended to us by Randy, who knew her from their days at the legendary Silicon Graphics (and before that she was at Hewlett-Packard).  He knew that Barb was a bleeding heart who wanted to put her hard-core business skills to work for good.

Starting a new organization from scratch after fifteen years as a bureaucrat was terrifying, to say the least.  We knew absolutely nothing about marketing, website technology, product development, financial statements - not even about how to establish and run a payroll or HR system.  All of those things had been done for us at the World Bank.  Startups require a lot of bluster, and in the early days that was pretty much all I felt I had.

No matter.  Barbara was our guide to how to think about and run a business.  She was extremely matter of fact, and taught us how to delve into new areas and just make decisions, knowing that we would have to revisit those decisions in the future.  She helped us work on pitches, think about business development, manage tech firms - you name it.  She even put us up in her garage apartment (the one with the bathroom in the kitchen!) on our first few trips to the west coast.

Barb has done many interesting socially innovative things over the past decade, and she is now VP of the Anita Borg Institute, a powerful network of women in technology.  The other day I realized that, for all the help she gave to us - and by extension to the 10,000 projects in 144 countries that GlobalGiving has supported - we paid her the sum of exactly zero dollars and zero cents.

This was not our intent; in the early years we had no money.  But I still feel bad about that, because I for one believe that highly committed and competent people should not always have to "do good for free" by volunteering their time.  So I will be looking for an opportunity to repay Barbara part of what she has earned.  (I seem to recall that Barbara is a Buddhist, so the good news is that I can pay her in my and her next life if not this one.)

Thank you for everything, Barbara.


Tuesday, April 08, 2014

Regular people out-forecast CIA agents

What's so challenging about all of this is the idea that you can get very accurate predictions about geopolitical events without access to secret information. In addition, access to classified information doesn't automatically and necessarily give you an edge over a smart group of average citizens doing Google searches from their kitchen tables.
That is from a recent NPR piece about how a group of regular citizens have been able to make more accurate forecasts than CIA experts.  To its credit, the CIA actually sponsored this work, in collaboration with Philip Tetlock, whose work I have blogged about before.  It also gives further weight to the importance of taking a dose of humility along with your own perceived expertise.

The CIA official who helped sponsor the work remains hopeful that experts still have a role, however:
Matheny doesn't think there's any risk that it will replace intelligence services as they exist. 
"I think it's a complement to methods rather than a substitute," he said. 
Matheny said that though Good Judgment predictions have been extremely accurate on the questions they've asked so far, it's not clear that this process will work in every situation. 
"There are likely to be other types of questions for which open source information isn't likely to be enough," he added.
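One reason a pooled crowd can hold its own against individual experts is purely mathematical. Forecasting tournaments like this one score predictions with a convex loss (the Good Judgment Project used Brier scores), and the average of many forecasts always scores at least as well as the average forecaster. Here is a minimal sketch with invented numbers - a hypothetical event, and 100 forecasters whose estimates are the truth plus independent noise:

```python
import random

random.seed(7)

def brier(p, outcome):
    """Squared error of a probability forecast (lower is better)."""
    return (p - outcome) ** 2

# Hypothetical event with true probability 0.7 that did occur (outcome = 1).
outcome = 1
forecasts = [min(1.0, max(0.0, 0.7 + random.gauss(0, 0.25))) for _ in range(100)]

avg_individual_score = sum(brier(p, outcome) for p in forecasts) / len(forecasts)
crowd_score = brier(sum(forecasts) / len(forecasts), outcome)

print(f"average individual Brier score: {avg_individual_score:.3f}")
print(f"pooled-forecast Brier score:    {crowd_score:.3f}")  # never worse than the average
```

By Jensen's inequality the pooled forecast can never score worse than the average individual, and whenever forecasters disagree it scores strictly better - and none of this requires classified information, only reasonably independent opinions.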


Saturday, March 08, 2014

Tyranny of Experts - Consumer Reports Edition

How often have I needed to replace an appliance and just gone to Consumer Reports, picked one of the top couple of models (usually the top one), and bought it sight unseen, happy in the knowledge that I had bought the best - and with little effort?  Today, when I went online to decide on a vacuum cleaner, I realized what a mistake that can be.

Consumer Reports gave its highest rating to a Kenmore, saying it is "impressive" and the "top pick":

I found it online and was about to purchase it, but then I looked down at the user ratings and saw that users gave it only 2 stars out of a possible 5.  Worse, 75% of actual users would not recommend it to a friend.  Here is the summary:

And here is a typical user review:

It took me about 5 minutes of scanning the user reviews to decide not to purchase the Kenmore.  And to be honest, I was pretty shocked.  Despite what the experts said, regular people seemed to hate this unit.

So I started skimming down the page to look for models that users themselves seemed to like.  I found a Miele, which was given mediocre ratings by the experts at Consumer Reports.  But actual users gave it 4.4 out of 5 stars:

And here is a typical user review of the Miele:

Now I'm not saying the Consumer Reports experts aren't smart.  I am sure they are.  And in some cases, their ratings agree with those of users.  The problem is that their own scorecards for quality are based on factors that are not well aligned with what consumers themselves actually care about.

Will I continue to subscribe to Consumer Reports?  Probably.  Will I listen blindly to their recommendations?  No.  From now on I will start with feedback from consumers - and then, if I have time, I will read the expert reviews.

[This post elaborates on a piece I wrote earlier for the Center for Global Development.  For more on the topic of feedback loops, citizen sovereignty, and development, see my upcoming review of Bill Easterly's new book, The Tyranny of Experts. Or better yet, buy the book itself. ]


Tuesday, February 25, 2014

Expertise and Humility

At a conference the other day, someone introduced me as a leading "expert."  I am as susceptible to flattery as the next guy, but over time I have grown more aware of the limits to what I know.

The general understanding is that you can have "hard knowledge" in the natural sciences, but less so in the social sciences.  Hard knowledge means that you can replicate what you know over and over again in the real world via experiments or projects.  Bridge engineers are typical examples - give them any known terrain and they can build bridge after bridge that will stand up in everyday use.   This is because the properties of the materials are well known, and the interactions between the materials are relatively simple.  Standard engineering formulas can thus be taught in schools around the world, and engineers who speak different languages can successfully converse in the lingua franca of mathematics.

As we move into the domain of human interactions, the properties of the materials (people) are highly variable from person to person.  What's more, the interactions between individuals often affect the properties of the individuals themselves by changing emotions, feelings, attitudes, strategies, and even body chemistry. Human systems are thus hugely more complex than most other physical systems, and we quickly lose the ability to model them in any dependable way.  How social interactions occur in a particular family, neighborhood, town, province, country, or continent can only rarely be generalized to other families, neighborhoods, and countries.

Recently, there has been unsettling news even for medical research - an area where we used to consider our knowledge "hard."  Last year, for example, Begley and Ellis, two researchers, looked at 53 landmark cancer studies and found they could replicate the results of only six of them.  Just last week, a 25-year retrospective study found that mammograms have no impact on death rates from breast cancer, overturning decades of conventional wisdom.

Philip Tetlock's work has shown that experts in a wide variety of disciplines are no better - and often worse - at prediction than computers or untrained people.  As I noted earlier:
College counsellors who were provided detailed information about students and even allowed to interview them did worse in predicting their freshman grades than a simple mathematical algorithm. Psychologists were unable to outperform their secretaries in a test to diagnose brain damage.
Closer to my field of international development, Michael Clemens and Justin Sandefur of the Center for Global Development have examined the work of Jeff Sachs and Paul Collier - two experts highly sought after by aid agencies for their insights.  After examining Sachs's work on the Millennium Village Project and Collier's work on migration, Clemens and Sandefur found fundamental errors in analysis, data, and logic.

After reading these devastating critiques, I began to wonder how many studies go unexamined.  There aren't enough Begleys, Ellises, Clemenses, and Sandefurs to go around to review all research.  We used to assume that professional journals performed this type of quality-control function, but just this month two journals were forced to retract 120 articles that they found had been computer-generated.  A couple of years ago, even a prestigious peer-reviewed mathematics journal had to retract a machine-generated "nonsense paper."

A 2005 paper by John Ioannidis took a macro approach to this question.  It looked at the way that much research is done and concluded that, because of statistical problems, design flaws, and skewed incentives, the majority of published research findings are false.  This paper has been the subject of much discussion, but whatever the exact numbers it seems clear that most research findings should be taken with a healthy degree of skepticism.
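The core of Ioannidis's argument can be reduced to a back-of-the-envelope calculation. With the conventional 5% significance threshold and 80% statistical power, the share of "significant" findings that reflect a real effect depends heavily on the prior odds that a tested hypothesis is true. The numbers below are illustrative, not taken from the paper:

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Share of statistically significant findings that reflect a real effect."""
    true_positives = prior * power          # real effects correctly detected
    false_positives = (1 - prior) * alpha   # null effects flagged by chance
    return true_positives / (true_positives + false_positives)

# In a mature field where half of tested hypotheses are true, most findings hold up:
print(f"{positive_predictive_value(0.50):.0%}")  # 94%
# In an exploratory field where 1 in 20 hypotheses is true, most findings are false:
print(f"{positive_predictive_value(0.05):.0%}")  # 46%
```

And these figures are optimistic: the biases, design flaws, and multiple testing that Ioannidis catalogs all push the true-finding rate lower still.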

All of this is not to disparage the value of research and analysis.  To the contrary, it calls for more.  It calls for a lot more trial and error, as suggested in Jim Manzi's excellent recent book Uncontrolled: The Surprising Payoff of Trial and Error for Business, Politics, and Society.  It calls for more "problem-driven iteration" as recently described by Andrews, Pritchett, and Woolcock.  It calls for much greater use of feedback loops in aid, philanthropy, and governance, with pioneering work being done by groups such as Feedback Labs.

But it does suggest the limits of what we (think we) know in many fields.  It calls for modesty and humility.  It calls for us to hold our beliefs lightly.  And, lest we fail to hold our beliefs lightly, we should keep in mind the damage we can do when we don't, as so powerfully described in Bill Easterly's new book The Tyranny of Experts, which I will review in an upcoming post.

Wednesday, January 15, 2014

The $100 Million Men

Dave Goldwyn - $100 Million Man 
We just passed $100 million in online transactions on GlobalGiving, with over 9,400 projects funded in 140 countries.  This is due to the amazing generosity of 370,000 donors and an impressive list of many of the world's most innovative and rapidly growing companies - not to mention what I personally think is the world's best network of strategic partners and funders.

And you want to talk about a truly exceptional team that delivers wow, wow, WOW?  We got that, too (it's our secret sauce, as anyone who has met them knows).

But that's not all.

Tom Bird - $100 Million Man
In the early days, we would go for long stretches with ZERO donations going through the website.  I still remember when we hit our first $100,000, which seemed like a miracle after so much trial and error and so much blood, sweat, and tears.

There were times early on when many people said "This will never work - it's a pipe dream, you should have stayed at your nice secure job at the World Bank."  Since at that time we were the first (and only) global online crowdfunding platform, I admit I wondered if the naysayers were right.

Why did we keep going during that tough start-up phase? And how did we go from $100,000 to where we are now? The answer in no small part lay in our first two board chairs, Dave Goldwyn and Tom Bird.  I call them GlobalGiving's $100 Million Men.

In the non-profit field, being on boards is often a thankless task. You don't get paid, the organization you are trying to help is hampered by all sorts of constraints, and you don't even get much recognition.  As a result, it's hard to find board members who are both really good and also willing to commit the time and effort needed during the stressful early years. Dave and Tom broke the mold in terms of competence combined with commitment, and we would not be anywhere near where we are today if we had not been lucky enough to have them at the helm.

Fran Hauser - the $1 Billion Woman?
Someone said to me yesterday "I can't wait for GlobalGiving to get to the next $100 million!" I replied, "But why would we be satisfied with that?  We won't be happy until we achieve $1 billion." His jaw dropped.  And to be honest, I surprised (even scared) myself.

But then I reflected on our new board chair, Fran Hauser, and her extraordinary fellow board members. So when I ask myself how long it will take before I write a post titled "The $1 Billion Woman," I know the answer: not long.

Stay tuned.

Monday, October 07, 2013

Fighting our Biases, Empathy Edition

Here is an example of how reporting on social science research can mislead rather than inform. The author tells us about new studies showing that rich people are less empathic (i.e., they care less about others) than poor people. While this may be true on average (and the author gives several reasons why it might be so), the article is likely to inflame biases rather than illuminate reality.

Why? Because the author neglects to tell us: a) how big the difference in empathy between the two groups is, and b) the underlying distribution of empathy within the rich and poor groups.

The almost inevitable result is that the article will spur one group to be more self-righteous and confident and the other group to be more defensive and angry.  

The underlying analyses generally involve complex regressions, but let's simplify a bit so we can imagine two competing realities behind the results of the study cited.

Figure 1 reveals what most readers will take away from a newspaper story like this.  It shows two distribution curves, one of rich people, and one of poor people.

Figure 1: Huge difference; no overlap
In this scenario, not only do rich people have much less empathy (C) on average than do poor people (D), but ALL rich people have less empathy than ALL poor people. The two distribution curves don't even intersect.

Contrast this with Figure 2:

Figure 2: Small difference; large overlap
In this scenario, rich people do have less empathy on average than poor people.  But the difference (B-A) is much smaller.  And there is huge overlap between the two groups.  There are plenty of rich people who are empathic, and plenty of poor people who are not.

Figure 2 more accurately represents the results of most social science research.  When scientists get lucky, they find a characteristic (in this case, wealth) that has a statistically significant impact on another characteristic (in this case, empathy).  But statistical significance does not mean that the difference is large.  And even more rarely does it mean that the underlying groups have little or no overlap.
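The gap between "statistically significant" and "large" is easy to demonstrate. In the sketch below (all numbers invented for illustration), two groups differ by a fifth of a standard deviation on some empathy score. With big samples the difference is wildly significant, yet the two curves overlap more than 90 percent - the Figure 2 world, not the Figure 1 world:

```python
from math import sqrt
from statistics import NormalDist

# Invented numbers: two groups whose mean "empathy scores" differ slightly.
rich = NormalDist(mu=49.0, sigma=10.0)
poor = NormalDist(mu=51.0, sigma=10.0)

n = 5000  # large samples make even tiny differences statistically significant
z = (poor.mean - rich.mean) / sqrt(rich.variance / n + poor.variance / n)

print(f"z statistic: {z:.0f}")                            # 10, far past the 1.96 cutoff
print(f"distribution overlap: {rich.overlap(poor):.0%}")  # 92%
```

A newspaper can truthfully report "rich people are less empathic" from data like this, even though knowing someone's wealth tells you almost nothing about that individual's empathy.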

[PS:  Apologies for the poor graphing skills.]


Wednesday, October 02, 2013

How to Get Lucky

Luck has a lot to do with success.  That is both my experience and the conclusion of a lot of research, including by Daniel Kahneman, winner of the 2002 Nobel Prize, who famously said:
Success = Talent + Luck. 
Great Success = A little more Talent plus a lot more Luck.
Many successful people find this assertion annoying, since they feel that they have worked very hard to succeed.  As noted in my previous post, these people suffer from survivorship bias: they fail to notice all the other people in the world who worked equally hard but did not succeed.  (Warren Buffett is a notable exception; he has repeatedly said that he was in the right place at the right time.)

But the good news for those less enlightened (and lucky) than Warren Buffett is that luck can be cultivated - it's not merely a matter of chance.  Psychologist Richard Wiseman argues that luck is the outcome of human interaction with chance:
Wiseman speculated that what we call luck is actually a pattern of behaviors that coincide with a style of understanding and interacting with the events and people you encounter throughout life. Unlucky people are narrowly focused, he observed. They crave security and tend to be more anxious, and instead of wading into the sea of random chance open to what may come, they remain fixated on controlling the situation, on seeking a specific goal. As a result, they miss out on the thousands of opportunities that may float by. Lucky people tend to constantly change routines and seek out new experiences. 
This at least leaves open the possibility that we can make ourselves luckier by changing our outlooks:
Wiseman saw that the people who considered themselves lucky, and who then did actually demonstrate luck was on their side over the course of a decade, tended to place themselves into situations where anything could happen more often and thus exposed themselves to more random chance than did unlucky people. The lucky try more things, and fail more often, but when they fail they shrug it off and try something else. Occasionally, things work out.
So luck is part persistence, but the speed at which we experiment is also key:
The people who labeled themselves as generally unlucky took about two minutes to complete the task. The people who considered themselves as generally lucky took an average of a few seconds. Wiseman had placed a block of text printed in giant, bold letters on the second page of the newspaper that read, “Stop counting. There are 43 photographs in this newspaper.” Deeper inside, he placed a second block of text just as big that read, “Stop counting, tell the experimenter you have seen this and win $250.” The people who believed they were unlucky usually missed both. 
And finally, part of the challenge is a willingness to abandon hypotheses and assumptions when they don't pan out:
What you can’t see, and what they can’t see, is that the successful tend to make it more probable that unlikely events will happen to them while trying to steer themselves into the positive side of randomness. They stick with it, remaining open to better opportunities that may require abandoning their current paths, and that’s something you can start doing right now without reading a single self-help proverb, maxim, or aphorism. 
[The quoted passages are from David McRaney.]
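Kahneman's formula can even be simulated. In the toy model below (the assumptions are mine, not Wiseman's or Kahneman's), every venture's outcome is skill plus luck in equal measure. Among the big successes - the only ones we ever read about - average luck turns out to be just as high as average skill:

```python
import random

random.seed(42)

# Toy model: outcome = skill + luck, both standard normal and independent.
ventures = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

# Keep only the big successes (outcomes in roughly the top few percent).
survivors = [(skill, luck) for skill, luck in ventures if skill + luck > 2.5]

mean_skill = sum(s for s, _ in survivors) / len(survivors)
mean_luck = sum(l for _, l in survivors) / len(survivors)

print(f"survivors' mean skill: {mean_skill:.2f}")
print(f"survivors' mean luck:  {mean_luck:.2f}")  # about the same as their skill
```

Study only the winners and you will see their skill and try to copy it, but half of what pushed them over the threshold was luck you cannot copy - which is why Wiseman's advice boils down to exposing yourself to more rolls of the dice.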


Anybody Read the Latest MisFortune Magazine?

If you are thinking about opening a restaurant because there are so many successful restaurants in your hometown, you are ignoring the fact the only successful restaurants survive to become examples. Maybe on average 90 percent of restaurants in your city fail in the first year. You can’t see all those failures because when they fail they also disappear from view.  
These words of wisdom are from David McRaney, whose work illuminates common human mistakes - in this case, survivorship bias. If we want to know how to succeed, shouldn't we look to successful people and companies for lessons to follow? The paradox is that we can often learn more from those who fail:
Survivorship bias pulls you toward bestselling diet gurus, celebrity CEOs, and superstar athletes... You look to the successful for clues about the hidden, about how to better live your life... Colleges and conferences prefer speakers who shine as examples of making it through adversity, of struggling against the odds and winning. The problem here is that you rarely take away from these inspirational figures advice on what not to do, on what you should avoid, and that’s because they don’t know. Information like that is lost along with the people who don’t make it out of bad situations or who don’t make it on the cover of business magazines – people who don’t get invited to speak at graduations and commencements and inaugurations.  
McRaney goes on:
If you spend your life only learning from survivors, buying books about successful people and poring over the history of companies that shook the planet, your knowledge of the world will be strongly biased and enormously incomplete. As best I can tell, here is the trick: When looking for advice, you should look for what not to do... [K]eep in mind that those who fail rarely get paid for advice on how not to fail, which is too bad because despite how it may seem, success boils down to serially avoiding catastrophic failure while routinely absorbing manageable damage.

Thursday, June 27, 2013

How I Made 12% Over Two Years by Sheer Procrastination

Here is an update on a piece I wrote about gold fever in early 2011.  By failing to jump on the bandwagon, I made over 12% on my non-investment (i.e., I avoided losing it).

PS:  The guy I (fortunately) did not listen to worked for a celebrated hedge fund investor who made billions of dollars in 2007 by short selling sub-prime mortgages.

PPS: Check out the excellent comments on that post by Ian Thorpe, Guy Pfefferman, and April Harding.
