Thoughts on Diversity in Tech

On April 28, I participated in a panel and Q&A on the intersection of race and technology.  My two co-panelists and I each had 15 minutes for a monologue about our personal experiences with how race and the tech industry intersect.  This post excerpts my prepared remarks.

Excerpt of Prepared Remarks

How did I end up writing software for a living anyway?  I blame LEGOs, science fiction, and video games.  While I've never actually worked in the gaming industry, I've built software solutions for many other industries: newspapers, radio, e-commerce, government, healthcare, and finance.  Tech industry salaries, stocks, and stock options have given me a lifestyle that could accurately be called upper middle class, including home ownership and annual domestic and international travel for work and pleasure (at least before the pandemic).
For all the financial rewards the industry has had to offer though, "writing software while black" has meant being comfortable with being the only one (or one of two) for the majority of my career, dating all the way back to my initial entry into the field.  As an undergraduate computer science (CS) major at the University of Maryland in the early to mid-nineties, I was on a first-name basis with all the other black CS majors because there were never more than 10-12 of us in the entire department during my 4 1/2 years there, on a campus with tens of thousands of students.  In that time, I only ever knew of one black graduate student in CS.  My discrete structures instructor at the time was Hispanic.  Even at a school as large as the University of Maryland, when I graduated in the winter of 1996, I was the only black graduate from the computer science department.
Unlike law, medicine, engineering, or architecture, computer science is still a young enough field that the organizations which have grown up around it to support and affirm practitioners of color are younger still.  The National Society of Black Engineers, for example, was formed in 1975.  The Information Technology Senior Management Forum (ITSMF), an organization with the goal of increasing black representation at senior levels of tech management, was formed in 1996.  The oldest founding year I could find for any of the existing tech organizations specifically supporting black coders (Black Girls Code) was 2011.  I'd already been a tech industry professional for 15 years at that point, and in every organization I'd worked for up to then, I was either the only black software engineer on staff or one of two.  It would be another two years before I joined a company with more than one other black person on staff in a software development role.
I've had project and/or people leadership responsibilities for 8-9 of my more than 20 years in tech.  As challenging as succeeding as an under-represented minority in tech has been, adding leadership responsibilities increased the scope of the challenge even more.  As rarely as I saw other black coders, black team leads were even scarcer until I joined my current company in 2017.  It took basically my entire career to find, but it is the only place I've ever worked where being black in tech is normal.  We regularly recruit from HBCUs.  We hire and promote black professionals in technical, analytical, managerial, and executive roles.  There are multiple black women at the VP level here.  The diversity even extends to the board of directors: four of its members are black men, including the CEO of F5 Networks.
Perhaps most importantly, and contrary to the sorts of things we hear too often from people like James Damore about diversity requiring lower standards, this diverse workforce has helped build and maintain a high-performance culture.  This publicly-traded company is regularly in the top 25 of Fortune Magazine's annual best places to work rankings.  This year, even in the midst of the pandemic, it jumped into the top 10 for the first time.
The company uses its size to the benefit of under-represented minorities in tech with business resource groups (BRGs).  Two of the BRGs I belong to have provided numerous opportunities to network with other black associates, and to recruit and be recruited for growth opportunities in other lines of business.  As a result, it's the only company I've worked for in my entire career where I've been able to recruit black engineers to join my team.  These groups have even provided a safe space to vent and grieve over the deaths of unarmed black men and women at the hands of police officers.  When we learned that Ahmaud Arbery had been murdered, I had black coworkers I could talk about it with, from the VP level all the way down to the individual contributor level.  We were able to talk about George Floyd's murder at the time, and in the aftermath of Derek Chauvin's trial.  As long as these deaths have been happening, this is the only employer where I've known there is a like-minded community, as well as sympathetic allies, with whom I can talk through such issues.
Not only has this company put millions of dollars into organizations like the Equal Justice Initiative, it set up a virtual event for EJI's founder, Bryan Stevenson, to speak to us and field our questions.  Ijeoma Oluo and Dr. Henry Louis Gates, Jr. have participated in Capital One events as well.  Capital One is one of just three Palladium Partners with ITSMF.  I recently completed a program they created for us, the Leaders of Color Workshop, designed to help black managers advance within the organization.
All the good things I've shared don't mean it's a perfect employer (as if such a thing existed).  I found it necessary to transfer to a different department and line of business in order to find a manager interested in helping me advance my career.  Talking to my classmates in the most recent workshop revealed quite a few stories of far more negative experiences than mine from people who have been part of the company much longer than I have.  There have been at least a couple of viral Medium posts from former employees whose experiences were far more negative than mine.  But at least in my experience, it has been and continues to be a great place to be black in tech.
Because the majority of our workforce is women, and nearly a third of the staff comes from minority groups that are under-represented in tech, the company has done a pretty good job of avoiding the sort of missteps that can put you in the news for the wrong reasons.  Seemingly just in time for the discussion we're about to have, the founders of Basecamp (the very opinionated makers of the product of the same name and the HEY email client, among other products) are taking their turns as the proverbial fish in a barrel due to their decision to follow Coinbase's example in disallowing discussions of politics and social causes at work.  So it was very interesting to read the open letter published by Jane Yang, one of their employees currently on medical leave.  She writes in some detail about the founders' decision to exclude hate speech and harassment from the initial use restrictions policy for their products.  Read Jason Fried's initial post and David Heinemeier Hansson's follow-up for fuller context.
Basecamp is a small example (just 60 employees) and Coinbase a slightly larger one (1,200+ employees), but they are good proxies both for many companies I've worked for and for companies orders of magnitude larger, like Facebook, Amazon, and Google, which have recently been in the news for discriminatory treatment of under-represented minorities in their workforces.  Their failures, and those of the tech industry at large, to seriously address the lack of diversity in recruiting and hiring have resulted and will continue to result in products that not only fail to adequately serve under-represented minorities, but actively cause harm.  In the same way monoculture in farming creates genetically uniform crops that are less resistant to disease and pests, monoculture in corporate environments leads to groupthink: more uniform, less innovative products with a higher risk of automating and perpetuating existing biases.
I recently watched Coded Bias, a documentary available on Netflix (and PBS) that highlighted the failings of existing facial recognition technology and the dangers it poses–to people of color in particular (because it tends to be far more inaccurate with darker-skinned people) but to people in general.  Were it not for the work of Joy Buolamwini, a black woman research assistant in computer science at MIT, we might not have learned about these flaws until much later–if at all.  These dangers extend beyond facial recognition technology to the application of algorithms and machine learning to everything from sentencing and parole determinations, hiring and firing decisions, to mortgage, loan, and other credit decisions.  Particularly as a bank employee, I’m much more conscious of the impact that my work and that of my team could potentially have on the lives of black and brown bank customers.  Even though it’s outside the scope of my current team’s usual work, I’ve begun making efforts to learn more about the ML and artificial intelligence spaces, and to raise concerns with my senior leadership whenever our use of ML and AI is a topic of discussion.  Despite all the challenges we face being in tech as under-represented minorities, or women, or both, it is vital that more of us get in and stay in tech–and continue to raise the concerns that would otherwise be ignored by today’s tech leaders.  Current and future tech products are quite likely to be worse if we don’t.

The Minimum Wage Debate is Too Narrow and Small

Recently I’ve found myself having variations of the same conversation on social media regarding the minimum wage.  Those to my political left have made statements such as “if your business would fail if you paid workers $15/hour you’re exploiting them.”  Those to my political right–some current or former business owners, some not–argue that minimum wage increases had a definite impact on their bottom line.

I have two problems with the first argument: (1) it oversimplifies and trivializes a very serious issue, and (2) these days, the arguers tend to aim it at small business owners.  Worker exploitation is real, and conflating every employer who follows the law on pay and other facets of employment with actual exploiters harms the cause of combatting serious abuses.  The outgoing Trump administration has been trying to reduce the wages of H-2A workers.  Undocumented workers in sectors like agriculture, food, and home-based healthcare fare even worse.  In some cases, drug addiction treatment has turned thousands of people into little more than indentured servants, with complicity from judges and state regulators.  Until recently, large corporations like Wal-Mart and Amazon evaded accountability for low worker pay and mistreatment despite having significant percentages of workers on food stamps and Medicaid and a high rate of worker injuries.

Another variation of the first argument takes a starting point in the past (like the 1960s) then says the minimum wage should be whatever the rate of inflation would have grown it to be between then and today.  If you go back to when Dr. Martin Luther King, Jr. was alive (for example), the minimum wage today “should” be $22/hour.  You can pick any point in time and say what the minimum wage should be based on inflation, but that’s not the same as grappling honestly with how industries have changed and/or how the nature of work has changed in the half-century plus since the civil rights era.

One challenge with the second argument is that the examples cited are typically restaurants or food services: businesses that operate at low margins and have high fixed costs in addition to being labor-intensive.  Even in that sector, the impacts of a $15/hour minimum wage are not necessarily what you might expect.  But not every business is the restaurant business, and a single sector cannot govern the parameters of debate for an issue that impacts the entire economy and the broader society if we want a broadly beneficial result.

At this point in the discussion, someone usually brings up automation, followed by someone mentioning universal basic income (UBI).  What I have said in the past, and will continue to say, is that automation is coming regardless of what the federal government, states, and/or localities do with the minimum wage.  As someone who has written software for a living for over 20 years, the essence of my line of work is automating things.  Sometimes software augments what people do by taking over rote or repetitive aspects of their jobs, freeing them up to do more value-added work.  But if an entire job is rote or repetitive, software can and does eliminate it.  The combination of software and robots is what enables some manufacturers to produce so many goods without the large number of workers they would have needed in the past.

Talking about UBI enlarges the conversation, but even that may not fully address the nature of the relationship between government, business, and people.  We do not talk nearly often enough about how long the United States got by with a much less robust social safety net than other countries because of how much responsibility employers used to take on for their employees.  Nor do we talk about the amount of additional control that gives employers over their employees, or the cracks in the system that can result from unemployment.  The usual response from the political right whenever there is any discussion of separating health care from employment is to cry "socialism".  But the falseness of such charges is easily exposed.  Capitalism seems to be alive and well in South Korea, which has a universal healthcare system, a significant portion of it privately funded.  Germany is another country where capitalism, universal healthcare, and private insurers co-exist just fine.

The conversation we need to have, as companies and their shareholders get richer, share fewer of those gains with their workers, and otherwise shed responsibilities they used to keep as part of the social contract, is how the relationship between government, business, and people should change to reflect the current reality.  The rationale always given for taxing capital gains at a lower rate than wages has been encouraging investment.  But as we've seen both in the pandemic and in the corporate response to the big tax cut in 2017, corporate executives mostly pocketed the gains or did stock buybacks to further inflate their share prices.  Far from sharing any of the gains with workers, some corporations laid workers off instead.  Given ample evidence that preferential tax treatment for capital gains does not result in more investment, the preference should end.  People of working age should not be solely dependent on an employer or Medicare for their healthcare.  A model where public and private insurance co-exist for those people and aren't tied to employment is where we should be headed as a society.

We need to think much harder than we have about what has to change, both to account for the deficiencies in our social safety net (which corporations will not fill) and to prepare for an economy on its way to eliminating entire fields that employ a lot of people today.  Bill Gates advocated a tax on robots years ago.  The challenges of funding UBI, and whether it's possible to do that while maintaining the social safety net as it currently exists, need to be faced head-on.  Talking about the minimum wage alone, even as multiple states and localities increase it well beyond the federal minimum, is not enough.

Résumé Shortening (and other résumé advice)

I saw a tweet from one of the best tech follows on Twitter (@raganwald) earlier today about the difficulty of shortening your résumé to five pages. While my career in tech is quite a bit shorter than his (and doesn’t include being a published author), I’ve been writing software for a living (and building/leading teams that do) long enough to need to shorten my own résumé to less than five pages.

While I’m certainly not the first person to do this, my (brute force) approach was to change the section titled “Professional Experience” to “Recent Professional Experience” and simply cut off any experience before a certain year. The general version of my résumé runs just 2 1/2 pages as a result of that simple change alone.

Other résumé advice I’ve followed over the years includes:

  • If there is a quantitative element to any of your accomplishments, lead with that. Prominently featured in my latest résumé are the annual dollar figures for fraud losses prevented by the team I lead (those figures exceeded $11 million in 2 consecutive years).
  • Don't waste space on a résumé objective statement.
  • Use bullet points instead of paragraphs to keep things short.
  • Put your degree(s) at the bottom of the résumé instead of the top.
  • Make your résumé discoverable via search engine. This bit of advice comes from my good friend Sandro Fouché, who started the CS program at the University of Maryland a few years ahead of me (and has since become a CS professor). I followed the advice by adding a copy of my current résumé to this blog (though I only make it visible/searchable when I'm actively seeking new work). His advice definitely pre-dates the founding of LinkedIn, and may predate the point at which Google Search got really good as well.

Speaking of LinkedIn, that may be among the best reasons to keep your résumé on the shorter side. You can always put the entire thing on LinkedIn. As of this writing, the UI only shows a paragraph or so for your most recent professional experience. Interested parties have to click “…see more” to display more information on a specific experience, and “Show n more experiences” where n is the number of previous employers you’ve had. Stack Overflow Careers is another good place to maintain a profile (particularly if you’re active on Stack Overflow).

What I’m Thankful For

I have plenty to be thankful for this year. My 4-year-old twins are doing well–healthy, happy, and eating everything in sight. My parents, sister, and extended family are doing well. My wife is having some success with her consulting business. I’ve passed the two year mark at my current company and it continues to be the best environment I’ve been part of as a black technologist in my entire career so far.

I’m looking forward to continuing professional and personal growth in 2020 (and beyond) and wish those who may read this the same.

Owning My Words

After Scott Hanselman recently retweeted this blog post about owning your words, I've decided to get back into blogging (and hopefully spend less time on social media) after a long hiatus from an already-infrequent blogging schedule. Twitter in particular has probably consumed the bulk of my writing output from 2014 to now, with Tumblr hosting a few longer pieces on topics outside of tech.

I'm finding the process of coming up with new topics that merit a blog post on a regular basis a bit challenging, so I'll probably start by revisiting older posts and using them as starting points for new work. The topics here will go back to having a clear tech connection, while other areas I'm interested in will get their own site. I recently bought a new domain that I like a lot better than the current .org; I may move this tech content there, or to a subdomain, if I'm feeling especially ambitious.

Thoughts on the Damore Manifesto

I’ve shared a few articles on Facebook regarding the now infamous “manifesto” (available in full here) written by James Damore.  But I’m (finally) writing my own response to it because being black makes me part of a group even more poorly represented in computer science (to say nothing of other STEM fields) than women (though black women are even less represented in STEM fields).

One of my many disagreements with Damore’s work (beyond its muddled and poorly written argument) is how heavily it leans on citations of very old studies. Even if such old studies were relevant today, more current and relevant data debunks the citations Damore uses. To cite just two examples:

Per these statistics, women are not underrepresented at the undergraduate level in these technical fields and are only slightly underrepresented once they enter the workforce.  So how do we get to the point where women are so significantly underrepresented in tech?  Multiple recent studies suggest that factors such as isolation, hostile male-dominated work environments, ineffective executive feedback, and a lack of effective sponsors lead women to leave science, engineering, and technology fields at double the rate of their male counterparts.  So despite Damore's protestations, women are earning entry-level STEM degrees at roughly the same rate as men and are then pushed out.

Particularly in the case of computing, the idea that women are somehow biologically less suited for software development is proven laughably false by simply looking at the history of the field.  Before computers were electro-mechanical machines, they were human beings, often women.  The movie Hidden Figures dramatized the role of black women in the early successes of the manned space program, but many women were key to advances in computing both before and after that time.  Women authored foundational work in computerized algebra, wrote the first compiler, were key to the creation of Smalltalk (the first object-oriented programming language), helped pioneer information retrieval and natural language processing, and much more.

My second major issue with the paper is its intellectual dishonesty.  The Business Insider piece I linked earlier covers the logical fallacy at the core of Damore's argument very well.  This brilliant piece by Dr. Cynthia Lee (a computer science lecturer at Stanford) does it even better, and touches directly on the topic I'm headed to next: race.  Dr. Lee notes quite insightfully that Damore's citations on biological differences don't extend to summarizing race and IQ studies as an explanation for the lack of black software engineers (either at Google or industry-wide).  I think this was a conscious omission that enabled at least some in the press whom you might expect to know better (David Brooks being one prominent example) to defend the memo, to the point of saying the CEO should resign.

It is also notable that though Damore claims to "value diversity and inclusion", he objects to every means Google has in place to foster them.  His objections to programs that are race- or gender-specific struck a particular nerve with me as a University of Maryland graduate who was attending the school when the federal courts ruled the Benjamin Banneker Scholarship could no longer be exclusively for black students.  The University of Maryland had a long history of discrimination against black students (including Thurgood Marshall, most famously).  The courts ruled this way despite the specific history of the school, which kept black students out of the law school until 1935 and out of the rest of the university until 1954.  In light of that history, it should not be a surprise that you wouldn't need an entire hand to count the number of black graduates from the School of Computer, Mathematical and Physical Sciences in the winter of 1996 when I graduated.  There were only 2 or 3 black students, and I was one of them (and I'm not certain the numbers would have improved much with a spring graduation).

It is rather telling how seldom preferences like legacy admissions at elite universities (or the preferential treatment of the children of large donors) are singled out for the level of scrutiny and attack that affirmative action receives.  Damore and others of his ilk who attack such programs never consider how the K-12 education system of the United States, funded by property taxes, locks in the advantages of those who can afford to live in wealthy neighborhoods (and the disadvantages of those who live in poor neighborhoods) as a possible cause for the disparities in educational outcomes.

My third issue with Damore's memo is the assertion that Google's hiring practices effectively lower the bar for "diversity" candidates.  I can say from my personal experience with at least parts of the interviewing processes at Google (as well as other major names in technology like Facebook and Amazon) that the bar to even get past the first round, much less be hired, is extremely high.  They were, without question, the most challenging interviews of my career to date (19 years and counting).  A related issue with representation (particularly of blacks and Hispanics) at major companies like these is the recruitment pipeline.  Companies (and people who were computer science undergrads with me who happen to be white) often argue that schools aren't producing enough black and Hispanic computer science graduates.  But very recent data from the Department of Education seems to indicate that there are more such graduates than companies acknowledge.  Furthermore, these companies all recruit from the same small pool of exclusive colleges and universities, despite the much larger number of schools that turn out high-quality computer science graduates every year (which may explain the multitude of social media apps coming out of Silicon Valley instead of applications that might meaningfully serve a broader demographic).

Finally, as Yonatan Zunger said quite eloquently, Damore appears to not understand engineering.  Nothing of consequence involving software (or a combination of software and hardware) can be built successfully without collaboration.  The larger the project or product, the more necessary collaboration is.  Even the software engineering course that all University of Maryland computer science students take before they graduate requires you to work with a team to successfully complete the course.  Working effectively with others has been vital for every system I’ve been part of delivering, either as a developer, systems analyst, dev lead or manager.

As long as I have worked in the IT industry, regardless of the size of the company, it is still notable when I’m not the only black person on a technology staff.  It is even rarer to see someone who looks like me in a technical leadership or management role (and I’ve been in those roles myself a mere 6 of my 19 years of working).  Damore and others would have us believe that this is somehow the just and natural order of things when nothing could be further from the truth.  If “at-will employment” means anything at all, it appears that Google was within its rights to terminate Damore’s employment if certain elements of his memo violated the company code of conduct.  Whether or not Damore should have been fired will no doubt continue to be debated.  But from my perspective, the ideas in his memo are fairly easily disproven.

Podcast Episodes Worth Hearing

Since I transitioned from a .NET development role into a management role two years ago, I haven't spent as much time as I used to listening to podcasts like Hanselminutes and .NET Rocks.  My commute took longer than usual today though, so I listened to two Hanselminutes episodes from December 2016.  Both were excellent, and I'm thinking about how to apply what I heard while directing an agile team on my current project.

In Hanselminutes episode 556, Scott Hanselman interviews Amir Rajan.  While the term polyglot programmer is hardly new, Rajan’s opinions on what programming languages to try next based on the language you know best were quite interesting.  While my current project is J2EE-based, between the web interface and test automation tools, there are plenty of additional languages that my team and others have to work in (including JavaScript, Ruby, Groovy, and Python).

Hanselminutes episode 559 was an interview with Angie Jones.  I found this episode particularly useful because the teams working on my current project include multiple automation engineers.  Her idea to include automation in the definition of done is an excellent one.  I'll definitely be sharing her slide deck on this topic with my team and others.

Software Development Roles: Lead versus Manager

I've held the titles of development lead and development manager at different points in my technology career. With the benefit of hindsight, one of the roles advertised and titled as the latter was actually the former. One key difference between the two roles boils down to how much of your time you spend writing code. If you spend half or more of your time writing code, you're a lead, even if your business cards have "manager" somewhere in the title. If you spend significantly less than half your time writing code, then the "manager" in your title is true to your role. When I compare my experience between the two organizations, the one that treats development lead and development manager as distinct roles with different responsibilities has not only been a better work environment for me personally, but has also been more successful at consistently delivering software that works as advertised.

A company can have any number of motivations for giving management responsibilities to lead developers. The organization may believe that a single person can be effective both at managing people and at delivering production code. It may have a corporate culture where only a minimal amount of management is needed and developers are self-directed. Perhaps its implementation of a flat organizational structure means that developers take on multiple tasks beyond development (not uncommon in startup environments). If a reasonably-sized and established company gives lead and management responsibilities to an individual developer, however, it is also possible that there are budgetary motivations for that decision. The budgetary motivation doesn't make a company bad (they're in business to make money, after all). It is a factor worth considering when deciding whether or not a company is good for you and your career goals.

Being a good lead developer is hard. In addition to consistently delivering high-quality code, you need to be a good example and mentor to less-senior developers. A good lead developer is a skilled troubleshooter (and a guide to other team members in the resolution of technical problems). Depending on the organization, they may hold significant responsibility for application architecture. Being a good development manager is also hard. Beyond the reporting tasks that are part of every management role, they're often responsible for removing obstacles that slow or prevent the development team's work. They also structure work and assign it in a way that contributes to timely delivery of functionality. The best development managers play an active role in the professional growth of the developers on their team, not just at annual review time. Placing the responsibility for these two challenging roles on a single person creates a role that is incredibly demanding and stressful. Unless you are superhuman, sooner or later your code quality, your effectiveness as a manager, or both will suffer. That outcome isn't good for you, your direct reports, or the company you work for.

So, if you’re in the market for a new career opportunity, understand what you’re looking for. If a development lead position is what you want, scrutinize the job description. Ask the sort of questions that will make clear that a role being offered is truly a development lead position. If you desire a development management position, look at the job description. If hands-on development is half the role or more, it’s really a development lead position. If you’re indeed superhuman (or feel the experience is too valuable to pass up), go for it. Just be aware of the size of the challenge you’re taking on and the distinct possibility of burnout. If you’re already in a job that was advertised as a management position but is actually a lead position, learn to delegate. This will prove especially challenging if you’re a skilled enough developer to have landed a lead role, but allowing individual team members to take on larger roles in development will create the bandwidth you need to spend time on the management aspects of your job. Finally, if you’re an employer staffing up a new development team or re-organizing existing technology staff, ensure the job descriptions for development lead and development manager are separate. Whatever your software product, the end result will be better if you take this approach.

Security Breaches and Two-Factor Authentication

It seems the news has been rife with stories of security breaches lately.  As a past and present federal contractor, I was directly impacted by the OPM breach.  That and one other breach also affected my current client.  The lessons I took from these and earlier breaches were:

  1. Use a password manager
  2. Enable 2-factor authentication wherever it’s offered

To implement lesson 1, I use 1Password.  It runs on every platform I use (Mac OS X, iOS and Windows), and has browser plug-ins for the browsers I use most (Chrome, Safari, IE).  Using the passwords 1Password generates means I no longer commit the cardinal security sin of reusing passwords across multiple sites.  Another nice feature specific to 1Password is Watchtower.  If a site where you have a username and password is compromised, the software will indicate that site is vulnerable so you know to change your password.  1Password even has a feature to flag sites with the Heartbleed vulnerability.

The availability of two-factor authentication has been growing (somewhat unevenly, but any growth is good), but it wasn’t until I responded to a tweet from @felixsalmon asking about two-factor authentication that I discovered how loosely some people define two-factor authentication.  According to this New York Times interactive piece, most U.S. banks offer two-factor authentication.  That statement can only be true if “two-factor” is defined as “any item in addition to a password”.  By that loose standard, most banks do offer two-factor authentication because the majority of them will prompt you for an additional piece of “out of wallet” information if you attempt to log in from a device with an IP address they don’t recognize.  Such out-of-wallet information could be a parent’s middle name, your favorite food, the name of your first pet, or some other piece of information that, in theory, only you know.  While it’s better than nothing, I don’t consider it true two-factor authentication because:

  1. Out-of-wallet information has to be stored
  2. The out-of-wallet information might be stored in plain-text
  3. Even if out-of-wallet information is stored hashed, hashed & salted, or encrypted at one bank, there’s no guarantee that’s true everywhere the information is stored (credit bureaus, health insurers, other financial institutions you have relationships with, etc.)
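To illustrate why storage matters, here's a minimal sketch (my own illustration, not any bank's actual scheme) of what storing an out-of-wallet answer salted and hashed looks like, versus keeping it in plain text:

```python
import hashlib
import hmac
import os

def hash_answer(answer, salt=None):
    """Hash an out-of-wallet answer with a per-record random salt.

    Returns (salt, digest). A leaked digest can't be read directly,
    and the unique salt prevents precomputed (rainbow-table) attacks.
    """
    if salt is None:
        salt = os.urandom(16)
    # PBKDF2 with many iterations makes brute-forcing each stolen record slow
    digest = hashlib.pbkdf2_hmac(
        "sha256", answer.strip().lower().encode(), salt, 100_000
    )
    return salt, digest

def verify_answer(answer, salt, expected_digest):
    """Re-derive the digest and compare in constant time."""
    _, digest = hash_answer(answer, salt)
    return hmac.compare_digest(digest, expected_digest)
```

Even this only mitigates the problem: a short, guessable answer ("pizza", "Smith") falls quickly to brute force no matter how it's stored, which is part of why out-of-wallet questions make a weak second factor.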

One of the things that seems clear after the Get Transcript breach at the IRS is that the thieves had access to their victims' out-of-wallet information, either because they purchased it, stole it, or found it on the victims' social media profiles.

True two-factor authentication requires a time-limited, randomly-generated piece of additional information that must be provided along with a username and password to gain access to a system.  Authentication applications like the ones provided by Google or Authy provide a token (a 6-digit number) that is valid for 30-60 seconds.  Some systems deliver this token via SMS so a dedicated application isn’t required.  By this measure, the number of banks and financial institutions that support true two-factor authentication is quite a bit smaller.  One of the other responses to the @felixsalmon tweet was this helpful URL: https://twofactorauth.org/.  The list covers a lot of ground, including domain registrars and cryptocurrencies, but might not cover the specific companies and financial institutions you work with.  In my case, the only financial institution I currently work with that offers true two-factor authentication is my credit union–Tower Federal Credit Union.  Hopefully every financial institution and company that holds our personal information will follow suit soon.
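The 6-digit tokens these authenticator apps generate follow open standards: HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). A minimal Python sketch of how such a token is derived from a shared secret and the current 30-second interval:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """HOTP (RFC 4226): an HMAC-SHA1-based one-time password."""
    # HMAC the 8-byte big-endian counter with the shared secret
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, interval=30, digits=6):
    """TOTP (RFC 6238): HOTP keyed to the current time interval.

    Both client and server derive the same counter from the clock,
    which is why the token expires after the interval elapses.
    """
    return hotp(key, int(time.time()) // interval, digits)
```

Because the counter advances with the clock, an intercepted token is useless within a minute or so, which is exactly the property out-of-wallet questions lack.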