Thoughts on the Damore Manifesto

I’ve shared a few articles on Facebook regarding the now infamous “manifesto” (available in full here) written by James Damore.  But I’m (finally) writing my own response to it because being black makes me part of a group even more poorly represented in computer science (to say nothing of other STEM fields) than women (though black women are even less represented in STEM fields).

One of my many disagreements with Damore’s work (beyond its muddled and poorly written argument) is how heavily it leans on citations of very old studies. Even if those old studies were still relevant today, more current data debunks the citations Damore uses. To cite just two examples:

Per these statistics, women are not underrepresented at the undergraduate level in these technical fields and only slightly underrepresented once they enter the workforce.  So how is it that we get to the point where women are so significantly underrepresented in tech?  Multiple recent studies suggest that factors such as isolation, hostile male-dominated work environments, ineffective executive feedback, and a lack of effective sponsors lead women to leave science, engineering and technology fields at double the rate of their male counterparts.  So despite Damore’s protestations, women are earning entry-level STEM degrees at roughly the same rate as men and are then pushed out of the field.

Particularly in the case of computing, the idea that women are somehow biologically less-suited for software development is proven laughably false simply by looking at the history of computing as a field.  Before computers were electro-mechanical machines, they were actually human beings, often women.  The movie Hidden Figures dramatized the role of black women in the early successes of the manned space program, but many women were key to advances in computing both before and after that time.  Women authored foundational work in computerized algebra, wrote the first compiler, were key to the creation of Smalltalk (one of the first object-oriented programming languages), helped pioneer information retrieval and natural language processing, and much more.

My second major issue with the paper is its intellectual dishonesty.  The Business Insider piece I linked earlier covers the logical fallacy at the core of Damore’s argument very well.  This brilliant piece by Dr. Cynthia Lee (computer science lecturer at Stanford) does it even better and finally touches directly on the topic I’m headed to next: race.  Dr. Lee notes quite insightfully that Damore’s citations on biological differences don’t extend to summarizing race and IQ studies as an explanation for the lack of black software engineers (either at Google or industry-wide).  I think this was a conscious omission that enabled at least some in the press who you might expect to know better (David Brooks being one prominent example) to defend this memo to the point of saying the CEO should resign.

It is also notable that though Damore claims to “value diversity and inclusion”, he objects to every means that Google has in place to foster them.  His objections to programs that are race- or gender-specific struck a particular nerve with me as a University of Maryland graduate who was attending the school when the federal courts ruled the Benjamin Banneker Scholarship could no longer be exclusively for black students.  The University of Maryland had a long history of discrimination against black students (including Thurgood Marshall, most famously).  The courts ruled this way despite the specific history of the school, which kept black students out of the law school until 1935 and out of the rest of the university until 1954.  In light of that history, it should not be a surprise that you wouldn’t need an entire hand to count the number of black graduates from the School of Computer, Mathematical and Physical Sciences in the winter of 1996 when I graduated.  There were only 2 or 3 of us, and I’m not certain the numbers would have improved much with a spring graduation.

It is rather telling how seldom preferences like legacy admissions at elite universities (or the preferential treatment of the children of large donors) are singled out for the level of scrutiny and attack that affirmative action receives.  Damore and others of his ilk who attack such programs never consider how the K-12 education system of the United States, funded by property taxes, locks in the advantages of those who can afford to live in wealthy neighborhoods (and the disadvantages of those who live in poor neighborhoods) as a possible cause for the disparities in educational outcomes.

My third issue with Damore’s memo is the assertion that Google’s hiring practices can effectively lower the bar for “diversity” candidates.  I can say from my personal experience with at least parts of the interviewing processes at Google (as well as other major names in technology like Facebook and Amazon) that the bar to even get past the first round, much less be hired, is extremely high.  They were, without question, the most challenging interviews of my career to date (19 years and counting).  A related issue with representation (particularly of blacks and Hispanics) at major companies like these is the recruitment pipeline.  Companies (and people who were computer science undergrads with me who happen to be white) often argue that schools aren’t producing enough black and Hispanic computer science graduates.  But very recent data from the Department of Education seems to indicate that there are more such graduates than companies acknowledge.  Furthermore, these companies all recruit from the same small pool of exclusive colleges and universities despite the much larger number of schools that turn out high-quality computer science graduates every year (which may explain the multitude of social media apps coming out of Silicon Valley instead of applications that might meaningfully serve a broader demographic).

Finally, as Yonatan Zunger said quite eloquently, Damore appears to not understand engineering.  Nothing of consequence involving software (or a combination of software and hardware) can be built successfully without collaboration.  The larger the project or product, the more necessary collaboration is.  Even the software engineering course that all University of Maryland computer science students take before they graduate requires you to work with a team to successfully complete the course.  Working effectively with others has been vital for every system I’ve been part of delivering, either as a developer, systems analyst, dev lead or manager.

As long as I have worked in the IT industry, regardless of the size of the company, it is still notable when I’m not the only black person on a technology staff.  It is even rarer to see someone who looks like me in a technical leadership or management role (and I’ve been in those roles myself a mere 6 of my 19 years of working).  Damore and others would have us believe that this is somehow the just and natural order of things when nothing could be further from the truth.  If “at-will employment” means anything at all, it appears that Google was within its rights to terminate Damore’s employment if certain elements of his memo violated the company code of conduct.  Whether or not Damore should have been fired will no doubt continue to be debated.  But from my perspective, the ideas in his memo are fairly easily disproven.

Podcast Episodes Worth Hearing

Since transitioning from a .NET development role into a management role 2 years ago, I haven’t spent as much time as I used to listening to podcasts like Hanselminutes and .NET Rocks.  My commute took longer than usual today though, so I listened to two Hanselminutes episodes from December 2016.  Both were excellent, so I’m thinking about how to apply what I heard to directing an agile team on my current project.

In Hanselminutes episode 556, Scott Hanselman interviews Amir Rajan.  While the term polyglot programmer is hardly new, Rajan’s opinions on which programming languages to try next based on the language you know best were quite interesting.  My current project is J2EE-based, but between the web interface and the test automation tools, there are plenty of additional languages that my team and others have to work in (including JavaScript, Ruby, Groovy, and Python).

Hanselminutes episode 559 was an interview with Angie Jones.  I found this episode particularly useful because the teams working on my current project include multiple automation engineers.  Her idea to include automation in the definition of done is an excellent one.  I’ll definitely be sharing her slide deck on this topic with my team and others.

Software Development Roles: Lead versus Manager

I’ve held the title of development lead and development manager at different points in my technology career. With the benefit of hindsight, one of the roles advertised and titled as the latter was actually the former. One key difference between the two roles boils down to how much of your time you spend writing code. If you spend half or more of your time writing code, you’re a lead, even if your business cards have “manager” somewhere in the title. If you spend significantly less than half your time writing code, then the “manager” in your title is true to your role. When I compare my experience between the two organizations, the one that treats development lead and development manager as distinct roles with different responsibilities has not only been a better work environment for me personally, but has been more successful at consistently delivering software that works as advertised.

A company can have any number of motivations for giving management responsibilities to lead developers. The organization may believe that a single person can be effective both in managing people and in delivering production code. They may have a corporate culture where only a minimal amount of management is needed and developers are self-directed. Perhaps their implementation of a flat organizational structure means that developers take on multiple tasks beyond development (not uncommon in startup environments). If a reasonably-sized and established company gives lead and management responsibilities to an individual developer or developers, however, it is also possible that there are budgetary motivations for that decision. The budgetary motivation doesn’t make a company bad (they’re in business to make money after all). It is a factor worth considering when deciding whether or not a company is good for you and your career goals.

Being a good lead developer is hard. In addition to consistently delivering high-quality code, you need to be a good example and mentor to less-senior developers. A good lead developer is a skilled troubleshooter (and a guide to other team members in the resolution of technical problems). Depending on the organization, they may hold significant responsibility for application architecture. Being a good development manager is also hard. Beyond the reporting tasks that are part of every management role, they’re often responsible for removing any obstacles that are slowing or preventing the development team from doing its work. They also structure work and assign it in a way that contributes to timely delivery of functionality. The best development managers play an active role in the professional growth of the developers on their team, well beyond writing annual reviews. Placing the responsibility for these two challenging roles on a single person creates a role that is incredibly demanding and stressful. Unless you are superhuman, sooner or later your code quality, your effectiveness as a manager, or both will suffer. That outcome isn’t good for you, your direct reports, or the company you work for.

So, if you’re in the market for a new career opportunity, understand what you’re looking for. If a development lead position is what you want, scrutinize the job description and ask the sort of questions that will make clear whether the role being offered is truly a development lead position. If you desire a development management position, look closely at the job description as well. If hands-on development is half the role or more, it’s really a development lead position. If you’re indeed superhuman (or feel the experience is too valuable to pass up), go for it. Just be aware of the size of the challenge you’re taking on and the distinct possibility of burnout. If you’re already in a job that was advertised as a management position but is actually a lead position, learn to delegate. This will prove especially challenging if you’re a skilled enough developer to have landed a lead role, but allowing individual team members to take on larger roles in development will create the bandwidth you need to spend time on the management aspects of your job. Finally, if you’re an employer staffing up a new development team or re-organizing existing technology staff, ensure the job descriptions for development lead and development manager are separate. Whatever your software product, the end result will be better if you take this approach.

Security Breaches and Two-Factor Authentication

It seems the news has been rife with stories of security breaches lately.  As a past and present federal contractor, the OPM breach impacted me directly.  That and one other breach impacted my current client.  The lessons I took from these and earlier breaches were:

  1. Use a password manager
  2. Enable 2-factor authentication wherever it’s offered

To implement lesson 1, I use 1Password.  It runs on every platform I use (Mac OS X, iOS and Windows), and has browser plug-ins for the browsers I use most (Chrome, Safari, IE).  Using the passwords 1Password generates means I no longer commit the cardinal security sin of reusing passwords across multiple sites.  Another nice feature specific to 1Password is Watchtower.  If a site where you have a username and password is compromised, the software will indicate that site is vulnerable so you know to change your password.  1Password even has a feature to flag sites with the Heartbleed vulnerability.

The availability of two-factor authentication has been growing (somewhat unevenly, but any growth is good), but it wasn’t until I responded to a tweet from @felixsalmon asking about two-factor authentication that I discovered how loosely some people define two-factor authentication.  According to this New York Times interactive piece, most U.S. banks offer two-factor authentication.  That statement can only be true if “two-factor” is defined as “any item in addition to a password”.  By that loose standard, most banks do offer two-factor authentication because the majority of them will prompt you for an additional piece of “out of wallet” information if you attempt to log in from a device with an IP address they don’t recognize.  Such out-of-wallet information could be a parent’s middle name, your favorite food, the name of your first pet, or some other piece of information that only you know.  While it’s better than nothing, I don’t consider it true two-factor authentication because:

  1. Out-of-wallet information has to be stored somewhere
  2. The out-of-wallet information might be stored in plain text
  3. Even if out-of-wallet information is stored hashed, hashed & salted, or encrypted with one bank (a minimal sketch of the salted-hash approach follows this list), there’s no guarantee that’s true everywhere the information is stored (credit bureaus, health insurers, other financial institutions you have relationships with, etc.)
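
For illustration, here is a minimal sketch of the hashed-and-salted approach using PBKDF2 from the .NET base class library (assuming a reasonably recent runtime).  The answer text, salt size, and iteration count are hypothetical choices of mine, not anything a particular bank actually does.

```csharp
using System;
using System.Security.Cryptography;

class OutOfWalletStorageSketch
{
    // Derive a salted hash of an out-of-wallet answer so the plain text never
    // has to be stored. The salt and hash are what would go in the database.
    static (byte[] Salt, byte[] Hash) HashAnswer(string answer)
    {
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var kdf = new Rfc2898DeriveBytes(answer, salt, 100_000, HashAlgorithmName.SHA256))
            return (salt, kdf.GetBytes(32));
    }

    // Re-derive the hash from the supplied answer and compare in constant time.
    static bool Verify(string answer, byte[] salt, byte[] expectedHash)
    {
        using (var kdf = new Rfc2898DeriveBytes(answer, salt, 100_000, HashAlgorithmName.SHA256))
            return CryptographicOperations.FixedTimeEquals(kdf.GetBytes(32), expectedHash);
    }

    static void Main()
    {
        var (salt, hash) = HashAnswer("first pet's name");
        Console.WriteLine(Verify("first pet's name", salt, hash));       // True
        Console.WriteLine(Verify("mother's maiden name", salt, hash));   // False
    }
}
```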

One of the things that seems clear after the Get Transcript breach at the IRS is that the thieves had access to their victims’ out-of-wallet information, whether they purchased it, stole it, or simply found it on the victims’ social media profiles.

True two-factor authentication requires a time-limited, randomly-generated piece of additional information that must be provided along with a username and password to gain access to a system.  Authentication applications like the ones provided by Google or Authy provide a token (a 6-digit number) that is valid for 30-60 seconds.  Some systems provide this token via SMS so a specific application isn’t required.  By this measure, the number of banks and financial institutions that support true two-factor authentication is quite a bit smaller.  One of the other responses to the @felixsalmon tweet was this helpful URL: https://twofactorauth.org/.  The list covers a lot of ground, including domain registrars and cryptocurrencies, but might not cover the specific companies and financial institutions you work with.  In my case, the only financial institution I currently work with that offers true two-factor authentication is my credit union, Tower Federal Credit Union.  Hopefully every financial institution and company that holds our personal information will follow suit soon.
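
To make the distinction concrete, here is a minimal sketch (in C#) of how a time-based one-time password of the kind Google Authenticator or Authy displays is typically generated, following the RFC 6238 approach: an HMAC over a counter derived from the current time, truncated to six digits.  The shared secret and 30-second step below are illustrative values, not any particular provider’s.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class TotpSketch
{
    // Generate a 6-digit, time-based one-time password in the style of RFC 6238.
    // "secret" is the shared key that both the authenticator app and the server hold.
    static string GenerateToken(byte[] secret, DateTimeOffset now, int stepSeconds = 30)
    {
        long counter = now.ToUnixTimeSeconds() / stepSeconds;

        // The counter is encoded as an 8-byte big-endian value before hashing.
        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(counterBytes);

        using (var hmac = new HMACSHA1(secret))
        {
            byte[] hash = hmac.ComputeHash(counterBytes);

            // Dynamic truncation: take 4 bytes starting at an offset derived from the hash itself.
            int offset = hash[hash.Length - 1] & 0x0F;
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];

            // Reduce to 6 digits; the value changes every stepSeconds seconds.
            return (binary % 1_000_000).ToString("D6");
        }
    }

    static void Main()
    {
        byte[] demoSecret = Encoding.ASCII.GetBytes("demo-shared-secret");
        Console.WriteLine(GenerateToken(demoSecret, DateTimeOffset.UtcNow));
    }
}
```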

Which Programming Language(s) Should I Learn?

I had an interesting conversation with a friend of mine (a computer science professor) and one of his students last week.  Beyond the basic which-language(s)-to-learn question, there were a couple of more intriguing ones:

  1. If you had to do it all over again, would you still stick with the Microsoft platform for your entire development career?
  2. Will Microsoft be relevant in another ten years?

The first question I hadn’t really contemplated in quite some time.  I distinctly recall a moment when there was a choice between two projects at the place where I was working–one project was a Microsoft project (probably ASP, VB6 and SQL Server) and the other one wasn’t (probably Java).  I chose the former because I’d had prior experience with all three of the technologies on the Microsoft platform and none with the others.  I probably wanted an early win at the company and picking familiar technology was the quickest way to accomplish that.  A couple of years later (in 2001), I was at another company and took them up on an opportunity to learn about .NET (which at the time was still in beta) from the people at DevelopMentor.  It only took one presentation by Don Box to convince me that .NET (and C#) were the way to go.  While it would be two more years before I wrote and deployed a working C# application to production, I’ve been writing production applications (console apps, web forms, ASP.NET MVC) in C# from then to now.  While it’s difficult to know for sure how that other project (or my career) would have turned out had I gone the Java route instead of the Microsoft route, I suspect the Java route would have been better.

One thing that seemed apparent even in 1999 was that Java developers (the good ones anyway) had a great grasp of object-oriented design (the principles Michael Feathers would apply the acronym SOLID to).  In addition, quite a number of open source and commercial software products were being built in Java.  The same could not be said of C# until much later.

To the question of whether Microsoft will still be relevant in another ten years, I believe the answer is yes.  With Satya Nadella at the helm, Microsoft seems to be doubling down on their efforts to maintain and expand their foothold in the enterprise space.  There are still tons of businesses of various sizes (not to mention state governments and the federal government) that view Microsoft as a familiar and safe choice both for COTS solutions and custom solutions.  So I expect it to remain possible to have a long and productive career writing software with the Microsoft platform and tools.

As more and more software is written for the web (and mobile browsers), whatever “primary” language a developer chooses (whether Java, C#, or something else altogether), they would be wise to learn JavaScript in significant depth.  One of the trends I noticed over the past couple of years of regularly attending .NET user groups is that fewer and fewer of the talks had much to do with the intricacies and syntactic tricks of Microsoft-specific technologies like C# or LINQ.  Instead, there would be talks about Bootstrap, Knockout.js, Node.js, Angular, and JavaScript.  Multiple presenters, including those who worked for Microsoft partners, advocated quite effectively for us to learn these technologies in addition to what Microsoft put on the market in order to help us make the best, most flexible and responsive web applications we could.  Even if you’re writing applications in PHP or Python, JavaScript and JavaScript frameworks are becoming a more significant part of the web every day.

One other language worth knowing is SQL.  While NoSQL databases seem to have a lot of buzz these days, the reality is that there is tons of structured, relational data in companies and governments of every size.  There are tons of applications that still remain to be written (not to mention the ones in active use and maintenance) that expose and manipulate data stored in Microsoft (or Sybase) SQL Server, Oracle, MySQL, and PostgreSQL.  Many of the so-called business intelligence projects and products today have a SQL database as one of any number of data sources.
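
As a small illustration of what working with that relational data looks like day to day, here is a hedged sketch of a parameterized query against SQL Server from C# using ADO.NET.  The connection string, table, and column names are hypothetical.

```csharp
using System;
using System.Data.SqlClient;

class SqlSketch
{
    static void Main()
    {
        // Hypothetical database and schema; the point is that a few lines of SQL
        // are still how most business data gets in and out of applications.
        const string connectionString = "Server=.;Database=Sales;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Parameterized to avoid SQL injection.
            using (var command = new SqlCommand(
                "SELECT TOP 10 CustomerName, TotalSpend FROM Customers " +
                "WHERE Region = @region ORDER BY TotalSpend DESC", connection))
            {
                command.Parameters.AddWithValue("@region", "Mid-Atlantic");

                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine($"{reader.GetString(0)}: {reader.GetDecimal(1):C}");
                }
            }
        }
    }
}
```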

Perhaps the best advice about learning programming languages comes from The Pragmatic Programmer:

Learn at least one new language every year.

One of a number of useful things about a good computer science program is that after teaching you fundamentals, it pushes you to apply those fundamentals in multiple programming languages over the course of a semester or a year.  Finishing a computer science degree should not mean the end of striving to learn new languages.  Different languages give us different tools for solving similar problems, and that ultimately helps make our code better, regardless of which language we’re writing in.

Everyone is Junior at Something–Even You

Hanselminutes #427  was an excellent interview with Jonathan Barronville, the author (perhaps the most intelligent and articulate 19-year-old I’ve ever heard) of this article on Medium.  The discussion covered a lot of ground, and posed a number of thought-provoking questions.  Three of the questions struck me as especially important.

What is senior?

In the podcast, Hanselman suggested three criteria: years in industry, years writing code in a language, and age.  Since I’ve been in the industry for 18 years, have been writing production C# code for about 10 of them, and turned 40 at the beginning of the year, those criteria argue in favor of me being considered senior.  Those numbers can also work against me in a way (and not just because of the field’s well-known problems with age discrimination).  Before I took my current job (over 2 years ago), I hadn’t written any production ASP.NET MVC, jQuery or knockout.js.  The last time I’d written any production JavaScript before then was before jQuery and node.js even existed.  So from the perspective of those technologies, I was junior (and still am in some respects).

While industry today seems to have a fetish for young developers, there is merit to that interest in one respect.  Men and women entering the industry right now, whether they’re fresh out of college (or even younger), are too young to have any memory of a world where Google didn’t exist.  They’re too young to remember a world before web pages.  Some of them have been writing software for the majority of their lives.  It’s natural to them in a way it wasn’t for me because I had to go to college to get access to really good computers and high-speed Internet.

That said, the number of years (or lack of them) isn’t an advantage or disadvantage if you haven’t had the sort of experiences you can grow from as a developer (and learned the right lessons from them).  Regardless of what age you were when you had the experiences, if you’ve had to build software that solved enterprise-level problems, dealt with scaling, refactoring and enhancement of large systems, or integration of systems, both succeeding and failing at addressing those challenges are what really make a senior developer.  More time in industry may give someone more opportunities to have those experiences, but if they haven’t had them, they’ve just been writing software for a long time.

What is the rush to be senior?

Hanselman made a comparison between tradesmen like carpenters and plumbers (who have to work as apprentices for 3-5 years and pass an exam before they can become journeymen) and our industry, where someone can have senior in their title without much experience.  While some (if not most) of that rush has to do with pay, there are drawbacks.  Because our field is relatively young in the grand scheme of things, there aren’t universally accepted standards and practices (especially compared to some branches of engineering, which have hundreds of years of history).  We place too much of a premium on speed, and not enough on depth of experience (and the time it takes to earn it).  One of the end results of this is the sort of interviews I’ve experienced on a regular basis.  I’ve seen tons of resumes from people with senior titles, only to watch those same candidates get stymied by interview exercises that ask fairly basic questions (on the level of the Fizz Buzz test).
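
For anyone unfamiliar with it, the Fizz Buzz test really is as basic as its reputation suggests; a working version fits in a dozen lines of C#.

```csharp
using System;

class FizzBuzz
{
    static void Main()
    {
        // Print 1 through 100, replacing multiples of 3 with "Fizz",
        // multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}
```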

I’d been working for less than four years when I first got “senior” added to my title.  It came with a nice raise (which I was certainly happy about) and more responsibilities (team leadership), but I certainly wasn’t senior in terms of software development experience after that short a period of time.  Not unlike what the classic Peter Norvig essay suggests about teaching yourself programming in ten years, that’s about how long it took for me to see myself as legitimately senior from an experience perspective.  Even now, having spent over 17 years in industry, I’m sure there are workplaces where I wouldn’t be considered senior because I haven’t architected super-large systems or led a team with dozens of people—and I’m alright with that.  I’ve matured enough after this amount of time to be more concerned with what kind of work I’m doing (and what I’m learning) than I am with what title an employer gives me.

Are we okay with not knowing something and then learning?

This question points in two directions:

  • are we okay with ourselves not knowing something and then having to learn it?
  • are we okay with others not knowing something and then having to learn it?

For me, the answer to the first question is yes.  In the case of jQuery and knockout.js (and other unfamiliar technologies like RavenDB), I had to be okay with not knowing.  Doing my own research, and not being too proud to ask a lot of questions of the younger developers on the team who clearly had more experience with those technologies, was necessary to advance to the point where I could do all that work myself.

The answer to the second question is the key to many of the problems with our industry, particularly when it comes to issues of gender and diversity.  Too many in our industry go beyond not being okay with someone not knowing something and cross the line to being condescending, rude, and even hostile.  I’ve been on the receiving end of that kind of treatment more often than I care to remember.  Too many workplaces allow people with superior technical skills to act like children instead of adults when interacting with their co-workers.  There is more and more being written about the sexism in the industry (pieces like this one, and this one), but not nearly enough on the negative impact that environment has on the ability and desire of others to learn and grow as professionals.  I think the persistently low numbers of minorities and women in the tech industry have as much to do with the perception (if not reality) that a lot of tech companies have high “a**hole thresholds” as they do with insufficient exposure to math and science in school.

The bottom line for me from the article and the podcast is not only that everyone in this industry starts out as junior level, but that technology changes so quickly that we will all be junior at something at multiple points throughout our careers in the tech industry.  We need to keep that knowledge in mind so that we can be more patient with ourselves as we learn and with those we work with as they learn.

Learning New Programming Languages

Important advice from The Pragmatic Programmer (page 62):

“Learn at least one new language every year.”

It’s advice I’ve been trying to follow more seriously since I first started reading the book last month.  One site that’s proven pretty cool for learning more JavaScript is codewars.com (thanks Dean).  The katas are small enough that it doesn’t take a ton of research to figure out how to do something in a language you’re learning.  Once you’ve developed a working solution, you can see how others have solved it (and compare your solution to theirs).  Since you write and test the katas in the browser, there’s none of the overhead of firing up an editor or uploading your solution somewhere.  Ideally I’d be writing a few katas per day, but a few katas a week are what I’ve been able to manage so far.

Since Apple introduced yet another language (Swift) at WWDC earlier this week, I’m starting to learn that language as well.  So far, the syntax is a lot easier to grasp than Objective-C’s.  The only real hassle with writing the code examples as I read the language guide is that the Xcode 6 beta crashes every half hour.

With both languages (or any language really), the real leap forward comes from building something non-trivial with them. Figuring out what that non-trivial something will be is another challenge altogether. I wish there were sites like codewars.com (or Project Euler) that put out larger-scale problems intended to be solved with software. Being part of the developer interview loop at work pushed me to create a few problems of that sort for use in interviewing developer candidates, but none of those exercises require more than 20 minutes of work. More significant challenges would make it possible to explore language features beyond basic control flow and data structures.

When Third-Party Dependencies Attack

Last week provided our office with an inconvenient lesson in what can happen when third-party dependencies break in unanticipated ways.  PostSharp is a key third-party dependency in the line-of-business web application we sell.  On the morning of May 20, our continuous integration server (we use TeamCity) began indicating a build failure with the following message:

  • PostSharp.3.1.34\tools\PostSharp.targets(313, 5): error MSB6006: “postsharp.4.0-x86.exe” exited with code -199.

The changed file was a Razor template file–nothing at all to do with PostSharp.  Only one person on our development team was experiencing this error on their local machine, but the end result–not being able to compile the solution locally–pretty much eliminated the possibility of that person being productive for the rest of the day.  As the day progressed, the CI server began showing exactly the same error in other branches–even with no changes to code.  It wasn’t until the next day that we received the explanation (and a resolution).

Reading the entire explanation is worthwhile, but the key reason for the failure is this:

“we … assumed that all failures would be in the form of a managed exceptions. We did not anticipate that the library would terminate the process.”

The fail-safe code that PostSharp implemented around a third-party licensing component assumed all failures would be managed exceptions (which they could catch and deal with accordingly).  Instead, this third-party component simply terminated the process.  The end result: any of their customers using the latest version of PostSharp couldn’t compile any solution that included it.  There’s no way of knowing for sure how many hours of productivity (and money) were lost as a result of this component, but the amounts were probably significant.  To his credit, the CEO apologized, and his development team removed the offending dependency, sacrificing the feature that depended on it.
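
A minimal sketch illustrates why that assumption was so costly: a try/catch block recovers from a managed exception, but when a dependency terminates the process outright, no catch (or finally) block ever runs.  Environment.Exit stands in here for whatever the licensing component actually did.

```csharp
using System;

class FailSafeDemo
{
    static void Main()
    {
        try
        {
            // A managed exception is caught by the fail-safe wrapper, as PostSharp expected.
            throw new InvalidOperationException("license check failed");
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Recovered from: {ex.Message}");
        }

        try
        {
            // But if the component simply terminates the process, the wrapper never
            // gets a chance to recover; execution ends here.
            Environment.Exit(-199);
        }
        catch (Exception)
        {
            Console.WriteLine("This line is never reached.");
        }
    }
}
```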

There are many lessons to be learned (or re-learned) from what we experienced with PostSharp, but I’ll talk about three. First, if a third-party dependency is critical to your application and has a licensing option that includes support, it is best to pay the money so that you have recourse if and when there’s an issue. On the Microsoft stack, this is proving increasingly costly as more third-party .NET libraries and tools raise their prices (there are quite a few formerly free .NET tools that have been purchased by companies and re-released as rather costly closed-source software).

Second, whether or not there are licensing costs, it’s a good idea to have more than one option for critical third-party dependencies. In the case of aspect-oriented programming on .NET, there are a number of alternatives to PostSharp. The vendor is even confident enough to list them on their website. So if licensing costs are a significant enough concern, it may be better to choose an open-source option that is less convenient but gives you the ability to customize it than a paid option that doesn’t (and yokes you to a specific vendor).

Third, it may make sense to avoid taking on a third-party dependency altogether. When it comes to the Microsoft stack, it’s likely that Microsoft offers a framework or API with at least some of the capabilities you need for your solution. In the case of AOP, Microsoft offers Unity to support those capabilities. If you’re only considering the free tier of a third-party dependency in a space where Microsoft offers a product, and that free tier isn’t a significant improvement over the Microsoft option, it may be best to stick with the Microsoft option.
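
As one hedged illustration of getting aspect-like behavior with no third-party package at all, here is a sketch using DispatchProxy, which ships with modern .NET. This is a different mechanism than Unity’s interception support, and the interface and logging “aspect” below are hypothetical.

```csharp
using System;
using System.Reflection;

public interface IOrderService
{
    void PlaceOrder(string sku, int quantity);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(string sku, int quantity) =>
        Console.WriteLine($"Placed order for {quantity} x {sku}");
}

// A logging "aspect" applied via DispatchProxy instead of a third-party AOP library.
public class LoggingProxy<T> : DispatchProxy
{
    private T _target;

    public static T Wrap(T target)
    {
        var proxy = Create<T, LoggingProxy<T>>();
        ((LoggingProxy<T>)(object)proxy)._target = target;
        return proxy;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
    {
        Console.WriteLine($"Calling {targetMethod.Name}");   // "before" advice
        var result = targetMethod.Invoke(_target, args);
        Console.WriteLine($"Finished {targetMethod.Name}");  // "after" advice
        return result;
    }
}

class Program
{
    static void Main()
    {
        IOrderService service = LoggingProxy<IOrderService>.Wrap(new OrderService());
        service.PlaceOrder("ABC-123", 2);
    }
}
```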


Another Tale of a Developer Interview Loop

There are literally millions of links on the internet about interviewing developers–interview questions, posts on why the way you do it is wrong, and even guides on making it through an interview process.  Here’s one more, about the interview process at my current employer.

Before I joined my current company (back in 2012), before I even set foot onsite, there was a phone screen.  At the time, there seemed to be one developer responsible for phone screening every prospective developer candidate for the entire company.  If memory serves, the phone screen lasted around 45 minutes.  The questions were challenging, but not impossible to answer.  When the in-person interviews were scheduled, I had no idea what I was in for.  Over the course of 5 hours, I spoke to 7 different people who had some role in 2 or 3 different projects or products within the company.  The first hour, they put a laptop with Visual Studio in front of me and asked me to write a console app that performed three different tasks (I won’t go into too much more detail as we still use the same exercise to this day).  I was able to complete the exercise with enough time to spare for my two interviewers to ask me questions about my solution (which, while it worked correctly, was not the most elegant).  The rest of the interviews were all questions, some behavioral/team/”fit”-related, but mostly technical.  All the developer hires that came in after me presumably went through a similar process.

Fast-forward to the fall of 2013: we’ve won a contract that more than doubles the amount of work we need to deliver.  The pressure is on to find, hire and onboard as many good developers as we can find.  An interview process that works just fine when you only hire a new developer every month or two scales poorly when you have a month or two to hire a dozen developers.  So we involve more developers in the interview process and cast a wide net for prospective hires.  After spending many man-hours interviewing candidates who struggle with our programming exercises, we find a few external candidates to hire, but far fewer than the dozen we need.  We end up grabbing people from other teams within the company to compensate.

So when our company changed the process again to involve developers in phone screens, I did some Googling to find out what sort of questions make for an effective phone screen.  By far, the most useful post I’ve found is Steve Yegge’s Five Essential Phone-Screen Questions.  Reading (and re-reading) the whole thing is definitely worth your time.  Our recruiters only allot 30 minutes of time for our phone screens (and I usually have code to design & write, or bugs to fix), so my phone screen generally only covers 3 of Yegge’s 5 areas: coding, OO design and data structures.  In the coding area, instead of giving the candidates homework (or having them read the code over the phone), I started sharing a Google document with them and watching them write their answer to the coding question.  This is a great way to get a sense of how quickly a prospective developer hire can come up with a solution on the fly.  A more involved (and somewhat buggier) approach is to use the .NET Fiddle online console along with its collaboration feature.  If it doesn’t crash on you during the interview, you’ll be able to see whether the solution compiles and runs successfully on the spot.  Thirty minutes has proven to be enough to get in a coding exercise and enough questions about OO design and data structures to have a good feel for whether or not it would be worthwhile to move someone on to the in-person interview phase of our process.  Since in-person interviews are generally conducted in pairs, each 30-minute phone screen that properly rejects a candidate saves 2-4 man-hours of additional interview time.
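
For a sense of scale, the kind of exercise that fits a 30-minute shared-document screen looks something like the following; this is a hypothetical example, not the exercise we actually use.  The candidate is asked to return the first non-repeated character in a string.

```csharp
using System;
using System.Collections.Generic;

class PhoneScreenSketch
{
    // Hypothetical phone-screen-sized exercise (not our actual question):
    // return the first character that appears exactly once in the input, or null.
    static char? FirstNonRepeatedCharacter(string input)
    {
        var counts = new Dictionary<char, int>();
        foreach (char c in input)
            counts[c] = counts.TryGetValue(c, out int n) ? n + 1 : 1;

        foreach (char c in input)
            if (counts[c] == 1)
                return c;

        return null;
    }

    static void Main()
    {
        Console.WriteLine(FirstNonRepeatedCharacter("swiss"));   // w
        Console.WriteLine(FirstNonRepeatedCharacter("aabb"));    // prints an empty line (null)
    }
}
```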

If there is any revision I would make to the current interview process, it would be to push our simpler questions into the candidate “homework” idea Yegge mentions early in his post.  Then we could reserve our 30 minutes of phone screen time for candidates who we already know can handle our easiest exercises.