Another Tale of a Developer Interview Loop

There are literally millions of links on the internet about interviewing developers–interview questions, posts on why the way you do it is wrong, and even guides on making it through an interview process.  Here’s one more, about the interview process at my current employer.

Before I joined my current company (back in 2012), before I even set foot onsite, there was a phone screen.  At the time, there seemed to be one developer responsible for phone screening every prospective developer candidate for the entire company.  If memory serves, the phone screen lasted around 45 minutes.  The questions were challenging, but not impossible to answer.  When the in-person interviews were scheduled, I had no idea what I was in for.  Over the course of 5 hours, I spoke to 7 different people who had some role in 2 or 3 different projects or products within the company.  For the first hour, they put a laptop with Visual Studio in front of me and asked me to write a console app that performed three different tasks (I won’t go into too much more detail, as we still use the same exercise to this day).  I was able to complete the exercise with enough time to spare for my two interviewers to ask me questions about my solution (which, while it worked correctly, was not the most elegant).  The rest of the interviews were all questions, some behavioral/team/”fit”-related, but mostly technical.  All the developer hires that came in after me presumably went through a similar process.

Fast-forward to the fall of 2013–we’ve won a contract that more than doubles the amount of work we need to deliver.  The pressure is on to find, hire, and onboard as many good developers as we can.  An interview process that works just fine when you only hire a new developer every month or two scales poorly when you have a month or two to hire a dozen developers.  So we involve more developers in the interview process and cast a wide net for prospective hires.  After spending many man-hours interviewing candidates who struggle with our programming exercises, we find a few external candidates to hire–but far fewer than the dozen we need.  We end up grabbing people from other teams within the company to compensate.

So when our company changed the process again to involve developers in phone screens, I did some Googling to find out what sort of questions make an effective phone screen.  By far, the most useful post I’ve found is Steve Yegge’s Five Essential Phone-Screen Questions.  Reading (and re-reading) the whole thing is definitely worth your time.  Our recruiters only allot 30 minutes for our phone screens (and I usually have code to design & write, or bugs to fix), so my phone screen generally only covers 3 of Yegge’s 5 areas–coding, OO design and data structures.  In the coding area, instead of giving the candidates homework (or having them read the code over the phone), I started sharing a Google document with them and watching them write their answer to the coding question.  This is a great way to get a sense of how quickly a prospective developer hire can come up with a solution on the fly.  A more involved (and somewhat more buggy) approach is to use the .NET Fiddle online console along with its collaboration feature.  If it doesn’t crash on you during the interview, you’ll be able to see if the solution compiles and runs successfully on the spot.  Thirty minutes has proven to be enough to get in a coding exercise and enough questions about OO design and data structures to have a good feel for whether it would be worthwhile to move someone on to the in-person interview phase of our process.  Since in-person interviews are generally conducted in pairs, each 30-minute phone screen that properly rejects a candidate saves 2-4 man-hours of additional interview time.
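
To make this concrete, here’s the flavor of exercise that fits comfortably in a shared Google document or .NET Fiddle session.  This is a deliberately generic example invented for this post–not our actual question:

using System;
using System.Linq;

public class Program
{
    // Reverse the order of the words in a sentence:
    // "the quick brown fox" becomes "fox brown quick the"
    public static string ReverseWords(string sentence)
    {
        var words = sentence.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        return string.Join(" ", words.Reverse());
    }
    public static void Main()
    {
        Console.WriteLine(ReverseWords("the quick brown fox"));
    }
}

Something this size leaves time to ask follow-up questions about the candidate’s solution and the trade-offs of alternative approaches.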

If there is any revision I would make to the current interview process, it would be to push our simpler questions into the candidate “homework” idea Yegge mentions early in his post.  Then we could preserve our 30 minutes of phone screen time for candidates who we already know have handled our easiest exercises.

Farewell RockNUG!

Last week was the final monthly meeting of the Rockville .NET User Group (aka RockNUG) after a seven-year run. I greatly appreciate the leadership of Dean Fiala. It takes a lot of effort to find sponsors, meeting locations, and speakers consistently, and he always came through. Fortunately, the name and domain will live on for future use in special events (like another Robocode programming contest).

Being part of this group had an important impact on my career as a software developer in the DC metropolitan area. I learned a ton from the different keynote speakers over the years. The n00b talk portion of each monthly meeting gave me opportunities to present shorter talks of my own. Preparing those talks taught me a lot, both through the research needed to give a good presentation and through the questions and other information the audience volunteered. I’ve met a number of friends in the industry through this group, and even recruited one of them to join me at my current employer.

A lot has changed since RockNUG first started. For one thing, there are far more user groups now than there were 7 years ago, which means a lot more competition for speakers. The other change has been in web development on the Microsoft stack–it requires fewer Microsoft-exclusive technologies today than in the past. The increasing popularity of web applications and the success of frameworks like Ruby on Rails, jQuery, node.js, and knockout.js (as well as languages like JavaScript) have broadened what those of us working in Microsoft shops need to know in order to be successful. So very few of the talks over the past couple of years have had a .NET-specific focus. Finally, there is a lot of great learning material available on the web now. Between companies like Pluralsight and WintellectNOW, and conferences that post their keynote presentations online, there is a wealth of learning opportunities for developers that don’t even require them to leave their desks.

None of these online options can replace the in-person interaction, networking and opportunities to build friendships that a user group like RockNUG can provide. So even though RockNUG has come to an end, I still believe in user groups. I’ll be on the lookout for groups just like it (or perhaps even create one).

Managing Your Tech Career

Episode #980 of .NET Rocks was an excellent 52 minutes on career management for developers.  Since turning 40 this year, I’ve been thinking a lot more about my career and where I want to take it from here.  The entire episode is well worth listening to, but I would distill the essence of the advice from the guest (John Sonmez) down to this: market yourself.

When I gave a talk to some software engineering students back in February, I encouraged them to start blogs, give presentations and talks, and start podcasts (so far I’ve only done the first two myself).  I suggested all of these things primarily as a way for them to improve their knowledge, but a higher profile on the internet is certainly a positive side-effect of doing those things.  One point I didn’t add (which Sonmez brings up in his interview) is that consistency is very important.  He recommends a blog post every week.  That’s a goal I’m striving to meet (though not always succeeding).

Another related point Sonmez made is that developers need to set aside regular time to manage their careers–something like an hour every two weeks, on average.  Consistency is especially important here as well (if not mandatory), given how quickly technology advances.  I’ve recently started reading The Pragmatic Programmer, and it frames the same idea in investment terminology.  Section 5 of the first chapter (Your Knowledge Portfolio) puts it this way:

“Your knowledge and experience are your most important professional assets.  Unfortunately, they’re expiring assets.”

Knowledge about specific programming languages, databases, etc. can age very poorly.  Failing to consistently add new assets to your knowledge portfolio, to diversify and balance those assets among various technologies (of varying maturities), and to “re-balance” that portfolio over time can result in obsolescence.  Given the ageism/age discrimination that already exists in information technology, having old or irrelevant skills is a quick way to end up at the margins of IT, working in companies that are yoked to technologies that will make it increasingly difficult for them to serve their business goals (much less to serve your goal of a fulfilling technology career).

I saw this first-hand in an unexpected way when I attended South by Southwest in 2013.  One of the shuttle bus drivers I rode with regularly between my hotel and the various conference venues was actually doing it for income between short-term software development gigs all over the country.  He was an older gentleman whose skills (at least on the Microsoft stack) hadn’t advanced beyond VB6.  While there are still a ton of software systems built in VB6 (I certainly built my share of them in the late 1990s and early 2000s), his knowledge portfolio means that contract work maintaining VB6 code may be all that’s available to him.

In my own career, I’ve been working to broaden my knowledge portfolio beyond the Microsoft stack.  Microsoft itself is doing some of this by adopting external technologies like JavaScript, jQuery, and knockout.js for web application development.  Angular.js is a framework strongly supported by Google that Microsoft has made sure plays very well with ASP.NET MVC.  So building my knowledge of JavaScript and of platforms like node.js is another goal–part of doing what I can to remain an attractive candidate for hire, whether as an employee or in a future of self-employment.

Code Generation with LINQPad 4

Today I encountered a task at work that offered the prospect of some pretty dull development work–code that needed to be written that was almost (but not quite) the same in multiple cases.  It seemed like work that could benefit from the use of T4 templates, but I quickly became frustrated by the process of setting up and debugging a template.  The interleaving of angle-bracket markup with code was never fun in XML, and T4 templates began to resemble that very quickly.

So after abandoning the T4 template approach, I fired up LINQPad to see if I could accomplish my goal there.  As it turned out, writing a small C# program in LINQPad for code generation was a lot easier.  I just needed to remember two key things about string substitution in verbatim string literals.  Here they are:

  1. Curly brackets need to be escaped, because the template is ultimately passed to string.Format.  "{" should be "{{" and "}" should be "}}".  Not doing this will result in a FormatException.
  2. Double quotes need to be escaped.  In a verbatim string literal, each double quote character (") is written as two double quotes ("").

Beyond that, it was a matter of writing a code template inside a verbatim string literal (@”<code template goes here>”) with format items where needed ({0},{1},…).
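
Here’s a minimal sketch of the approach (the class and table names are invented for illustration–the real sample is in the gist linked below):

// Run as "C# Statements" in LINQPad.  The verbatim string literal is the
// code template: braces doubled, double quotes doubled, and format items
// ({0}, {1}) marking the spots that vary between files.
var template = @"public class {0}Repository
{{
    private const string TableName = ""{1}"";
}}";
var entities = new[] { new { Name = "Customer", Table = "Customers" },
                       new { Name = "Order", Table = "Orders" } };
foreach (var entity in entities)
{
    var code = string.Format(template, entity.Name, entity.Table);
    Console.WriteLine(code); // or File.WriteAllText to emit one file per entity
}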

I’ve made a code sample available as a GitHub gist here.  So far, I’ve used this technique to generate nearly 20 files in a fraction of the time it would have taken to write them manually.  Very little manual tweaking of the files was needed after generation, which left more time to test the generated code in real scenarios.

Re-Introducing NuGet (and introducing Chocolatey)

Last month, I presented on the topics of NuGet and Chocolatey at RockNUG as the lead-in to David Makogon‘s Polyglot Persistence talk. Since the time I first gave a presentation on NuGet at a previous employer a couple years ago, the package manager has matured quite a bit. Because there was far more than 30 minutes worth of material to discuss, the rest of this post will cover material I didn’t get to, commentary from the audience, and the answer to a question about tags.

In discussing the term package manager, I indicated that it meant more than one thing:

  • automation of dependency management for operating systems (think Unix or Linux distributions)
  • automation of dependency management for programming languages (think Perl’s CPAN, Ruby Gems, Node.js npm)

NuGet is the second type.  I find Chocolatey quite interesting because it’s a package manager of the first type that leverages the capabilities of NuGet to accomplish its work.

NuGet enables us as developers to define and re-define what a third-party dependency is.  The team at Fortigent (one of RockNUG’s sponsors) has made packages out of some of the functionality they’ve developed internally.

There are a couple of different ways to create packages:

  • by hand–write a nuspec file yourself and run “nuget pack” against it
  • from a project–generate the nuspec from a csproj file and pack that

The second way is recommended for integration with build systems.  Typing “nuget spec” in the location of the csproj file you want to make a package out of will generate a “nuspec” file (in addition to automatically referencing other files as dependencies).  Edit this file in order to add tags, licensing, links and other information you want to make available in your package.  Typing “nuget pack” followed by the csproj file will generate a file with the .nupkg extension.  The extension merely hides the fact that it’s a ZIP file with some additional information.
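
In practice, it’s just two commands run from the project’s folder (the project name here is a placeholder):

nuget spec
nuget pack MyProject.csproj

The first generates the nuspec file for you to edit; the second produces the .nupkg.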

In addition to creating packages, NuGet gives us the ability to set up our own package feeds.  The feed can be as simple as a network share with packages in it.  One step up from that is to create an empty ASP.NET Web application and add NuGet.Server to it.  This will add everything the application needs to host your own packages (or others from third parties).  You can even publish your packages to this type of application if you wish.  The pinnacle of NuGet package distribution is to host your own fork of the NuGet Gallery (available on GitHub).  One software vendor, JetBrains, forked the NuGet Gallery to publish documentation on all the plug-ins available for the latest version of ReSharper, as well as to make it possible to download ReSharper itself.  Chocolatey uses the NuGet Gallery code in a similar way.  Unlike the ReSharper gallery (which doesn’t let you download plugins), the Chocolatey gallery does allow downloads (though the actual installs require command-line interaction, which is helpfully displayed next to each package).
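
For the NuGet.Server option, the setup really is that light.  With the empty web application open, run this from the Package Manager Console:

Install-Package NuGet.Server

Once the application is deployed, any .nupkg files placed in its Packages folder are served up as part of the feed.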

One of the NuGet-related projects I found particularly interesting is concierge.nuget.org.  Its objective is to recommend NuGet packages in the same way we receive movie, music and product recommendations from Netflix, Spotify or Amazon.  Simply upload the packages.config file for your project and get recommendations back.  I learned about this (and other .NET development-related topics) on The Morning Brew.

Q & A

While there weren’t any questions at the end, there was one asked during the presentation about the “tags” element of the nuspec file inside each package.  When you look at a package in the NuGet Gallery (like EntityFramework, for example), you see a list of linkable tags.  Clicking on one actually triggers a search for all the packages that share that tag.  So if you’re a package author who wants their package to be discovered more easily, putting the right keywords in the “tags” element will help.
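
For reference, here’s where tags live in a nuspec–a single space-delimited list inside the metadata element (all values below are made up):

<package>
  <metadata>
    <id>MyCompany.Logging</id>
    <version>1.0.0</version>
    <authors>MyCompany</authors>
    <description>Logging helpers shared across our projects.</description>
    <tags>logging diagnostics tracing</tags>
  </metadata>
</package>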

Reducing Duplication with Interfaces, Generics and Abstracts

The parts of our application (a long-term service and support system for the state of Maryland) that follow the DRY principle best tend to start with a combination of generic interfaces inherited by an abstract class that implements common functionality.  The end result–specific implementations that consist solely of a constructor.  I was able to accomplish this as well in one of my more recent domain implementations.  I’ve created a sample (using fantasy football as a domain) to demonstrate the ideas in a way that may be applied to future designs.

Let’s take the idea of a team roster.  A roster consists of players with a wide variety of roles that can be grouped this way:

  • QBs
  • offensive linemen
  • skill position players
  • defensive linemen
  • linebackers
  • defensive backs
  • special teams

Since I want specific implementations that are (very) small, I’ll need to find common attributes between these different groups.  Each roster grouping above is just a list of players.  Things that are common to all players (regardless of position) include attributes like:

  • first name
  • last name
  • team name
  • position

The first three attributes are just strings of text, so I treat them as such in my implementation.  Position could be treated that way too, but instead I’ll implement an enumeration with all the positions and leave the implementation of it to position-specific classes I’ll create later.  Going back to the roster grouping idea as a list of players, we can use a generic interface implemented by an abstract class so that implementations of the groups above will differ only by constructor.  Now, when I implement a Quarterbacks group,  the only differences between it and the implementation of an OffensiveLinemen group are the class names and types.  The RosterGroup class contains all the important functionality, including the IEquatable implementation that enables comparison of groups.  I followed a ReSharper suggestion to make IRosterGroup explicitly covariant.
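
Here’s a condensed sketch of the pattern (the fantasy football sample goes further; any member names beyond IRosterGroup, RosterGroup and Quarterbacks are shorthand I’m using for this post):

using System;
using System.Collections.Generic;
using System.Linq;

public enum Position { Quarterback, OffensiveLineman, Linebacker, DefensiveBack }

public abstract class Player
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string TeamName { get; set; }
    // Each position-specific class declares its own position.
    public abstract Position Position { get; }
}

public class Quarterback : Player
{
    public override Position Position { get { return Position.Quarterback; } }
}

// ReSharper's suggestion: T only appears in output positions, so the
// interface can be declared covariant with "out".
public interface IRosterGroup<out T> where T : Player
{
    IEnumerable<T> Players { get; }
}

// All the shared functionality lives here, including the IEquatable implementation.
public abstract class RosterGroup<T> : IRosterGroup<T>, IEquatable<RosterGroup<T>> where T : Player
{
    private readonly List<T> _players;
    protected RosterGroup(IEnumerable<T> players) { _players = players.ToList(); }
    public IEnumerable<T> Players { get { return _players; } }
    // Groups are equal when they hold the same players in the same order.
    public bool Equals(RosterGroup<T> other)
    {
        return other != null && _players.SequenceEqual(other._players);
    }
}

// A concrete group is nothing but a constructor.
public class Quarterbacks : RosterGroup<Quarterback>
{
    public Quarterbacks(IEnumerable<Quarterback> players) : base(players) { }
}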


Book Review: Building Interactive Queries with LINQPad

Any new technical book has the challenge of adding value above and beyond what’s available for free on the web.  A new book on LINQPad has the additional challenge of adding value above and beyond the wealth of samples already included with LINQPad, including code samples from two LINQPad-enabled books.  So when I received my review copy of Building Interactive Queries with LINQPad, I was very curious to see what the author (Sebastien Finot) could accomplish in 126 pages.

Even as someone who has used LINQPad enough in the past few years to present on it in front of a .NET user group, I learned new things about the tool I hadn’t known before (such as the ability to interact with the console, and CSS customization of the application’s look-and-feel).  The book might have been more accurately titled “Building Interactive Queries with LINQ and LINQPad”, as it provided good examples of a wide variety of LINQ’s query operators.  Finot also mentioned the performance implications of ToList()–a very useful mention, depending on the size of the collections you might be dealing with in your queries.  All the code samples in the book are available for download as well.

The book missed some opportunities to add value for readers.  Fuller treatment of the NuGet dependency management capabilities in the paid versions of LINQPad would have been helpful in deciding if the feature was worth paying for.  Finot also mentioned the existence of LINQ to Twitter and LINQ to JSON APIs but didn’t link to the projects in the book.  More examples of using LINQ to parse and manipulate JSON (instead of XML) would have improved the book significantly, given the increased usage of JSON in .NET development these days.  Unfortunately, the code samples didn’t include databases, which would have enabled the author to go above and beyond the fairly standard Northwind database examples.  A custom OData feed for use in explaining the ability of LINQPad to query those data sources would have been a great help as well (given the rather tenuous availability of the sample services at odata.org).

Building Interactive Queries with LINQPad is the only book I’ve seen dealing specifically with LINQPad.  If you use LINQPad on a regular basis (or plan to), the e-book is worth purchasing.  For an in-depth treatment of LINQ, you’ll have to look elsewhere.

Disclosure: I received the e-book free of charge from the publisher for the purpose of providing this review.

Binding Redirects, StructureMap and Dependency Version Upgrades

Dealing with the fallout of failing unit tests after a code merge is one of the most frustrating tasks in software development.  And as one of a (very) small number of developers on our team who believe in unit testing, it fell to me to determine the cause of multiple instances of the StructureMap exception code 207 error.

As it turned out, the culprit was a tactic I’ve used in the past to work with code that only works with a specific version of an assembly: the binding redirect.  When the same person is in charge of upgrading dependencies, this tends not to be an issue–if they’ve used binding redirects, they know the redirects have to be updated when the dependencies are.  In this case, the dependencies were upgraded and the redirects were not.  As a result, StructureMap tried to find a specific version of an assembly that was no longer available and threw exception code 207 when it failed.
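
For anyone who hasn’t worked with one, a binding redirect is a small block in app.config or web.config (the assembly name, public key token and version numbers below are placeholders).  The newVersion value has to change whenever the installed version of the dependency does:

<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Some.Dependency" publicKeyToken="0123456789abcdef" culture="neutral" />
        <!-- A stale newVersion after an upgrade causes exactly this kind of failure -->
        <bindingRedirect oldVersion="0.0.0.0-2.9.9.9" newVersion="3.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>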

The App Store Economy Ain’t Broken (So Don’t Fix It)

I came across this article via Daring Fireball, and figured I’d post my two cents about it.  I disagree with both the premise of the article and some of the specifics.

To the question of “why are so many of us so surprisingly cheap when browsing the virtual shelves of the App Store?” I’d say because quite a few vendors have conditioned us to expect high-quality apps for a fairly low price. It’s the same reason that the vast majority of people expect news to be free on the Internet.  Those news sources that went online with paywalls at the beginning (The Wall Street Journal and The Economist are two publications I read for example) are still doing just fine financially.  Those that didn’t are struggling financially (or going out of business altogether).

The idea that “we as cheap customers are having a negative impact on a lot of both real and potential businesses” is one I disagree with.  One, because the author doesn’t quantify the negative impact.  Two, because a potential business is a valueless unknown (and as such, can’t have any real weight in a discussion of what to pay for products from real companies).  I’ll certainly buy an app if I use it a lot (and/or get tired of seeing ads in the case of most games).  The benefit of the low pricing both to us as consumers and to app developers is that we can buy multiple apps that do similar things without having to think much about the cost (it’s why I own more than one photography app, for example).

I’m not a big fan of in-app purchases (especially after finding out how much my wife spent on a single game), but I don’t see much of a difference between that model and the licensing/subscription model that more and more software companies (Adobe, Microsoft) and others (Netflix, Hulu, Spotify, Pandora) are moving (or have already moved) to.  The author’s focus on social media apps and games leaves out more serious “service-backed” apps like Evernote, GitHub, Flickr, DropBox, Box, LinkedIn and Google Drive that let you use a limited set of functionality for free and pay more for additional features or storage space.

Companies who sell apps aren’t doing it for charity.  So if they’re any good at business at all, they’ll sell their products at a price that will keep them in business–or they’ll go out of business.  It isn’t our job as consumers to keep poorly run companies in business by buying their software.  And despite the author’s suggestion, paying for great apps now certainly doesn’t mean great apps later.

Replicating Folder Structures in New Environments with MSBuild

I recently received the task of modifying an existing MSBuild script to copy configuration files from one location to another while preserving all but the top levels of their original folder structure.  Completing this task required a refresher in MSBuild well-known metadata and task batching (among other things), so I’m recounting my process here for future reference.

The config files that needed copying were already collected into an item via a CreateItem task.  Since we’re using MSBuild 4.0 though, I replaced it with the simpler ItemGroup.  CreateItem has been deprecated for a while, but can still be used.  There is a bit of debate over the precise differences between CreateItem and ItemGroup, but for me the bottom line is the same (or superior) functionality with less XML.
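
The before-and-after looks something like this ($(ConfigSourceDir) stands in for whatever property holds the source root):

<!-- Deprecated, but still works: -->
<CreateItem Include="$(ConfigSourceDir)\**\Web.config">
  <Output TaskParameter="Include" ItemName="ConfigFiles" />
</CreateItem>

<!-- Equivalent ItemGroup: -->
<ItemGroup>
  <ConfigFiles Include="$(ConfigSourceDir)\**\Web.config" />
</ItemGroup>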

Creating a new folder on the fly is easy enough with the MakeDir task.  There’s no need to manually check whether the directory you’re trying to create already exists.  The task just works.

The trickiest part of  this task was figuring out what combination of well-known metadata needed to go in the DestinationFiles attribute of the Copy task to achieve the desired result.  The answer ended up looking like this:

<Copy SourceFiles="@(ConfigFiles)" DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />

The key bit of metadata is the RecursiveDir part.  Since the ItemGroup that builds the file collection uses the ** wildcard, which covered all of the original folder structure I needed, putting %(ConfigFiles.RecursiveDir) after the new “root” destination and before the file names gave me the result I wanted.  Another reason well-known metadata was vital to the task is that all the files have the same name (Web.config), so the easiest way to differentiate them for copying purposes was their location in the folder structure.
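
Putting the pieces together, the relevant portion of the target looks roughly like this (the target name is mine; ConfigFiles, $(OutDir) and $(Environment) come from the script discussed above):

<Target Name="CopyEnvironmentConfigs">
  <MakeDir Directories="$(OutDir)_Config\$(Environment)" />
  <Copy SourceFiles="@(ConfigFiles)" DestinationFiles="$(OutDir)_Config\$(Environment)\%(ConfigFiles.RecursiveDir)%(ConfigFiles.Filename)%(ConfigFiles.Extension)" />
</Target>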

In addition to the links above, this book by Sayed Ibrahim Hashimi was very helpful.  In a previous job where configuration management was a much larger part of my role, I referred to it (and sedodream.com) on a daily basis.