Automating Indeed’s Release Process

Indeed’s rapid growth has presented us with many challenges, especially to our release process. Our largely manual process did not scale and became a bottleneck. We decided to develop a custom solution. The lessons we learned in automating our process can be applied to any rapidly growing organization that wants to maintain software quality and developer goodwill.

How did we end up here?

Our software release process has four main goals:

  • Understand which features are being released
  • Understand cross-product and cross-team dependencies
  • Quickly fix bugs in release candidates
  • Record release details for tracking, analysis, and repeatability

Our process ended up looking like this:

This process was comprehensive but required a lot of work. To put it in perspective, a software release with 4 new features required over 100 clicks and Git actions. Each new feature added about 13 actions to the process.

We identified four primary problems:

  • Release management took a lot of time.
  • It was hard to understand what exactly was in a release.
  • So many manual steps created a lot of potential for error.
  • Only senior engineers knew enough to handle a release.

We came to a realization: we needed more automation.

But wait — why not just simplify?

Of course, rather than automating our process, we could just simplify it. However, our process provided secondary benefits that we did not want to lose:

Data. Our process provided us with a lot of data and metrics, which allowed us to make continual improvements.

History. Our process allowed us to keep track of what was released and when it was released.

Transparency. Our process, while complicated, allowed us to examine each step.

Automating our way out

We realized that we could automate much of our process and reduce our overhead. To do so, we would need to integrate better with the solutions we already had in place — and be smart about it.

Our process uses multiple systems:

  • Atlassian JIRA: issue management and tracking
  • Atlassian Crucible: code reviews
  • Jenkins: release candidate builds and deploys
  • GitLab: source control
  • Various build and dependency management tools

Rather than replace these tools, we decided to create a unified release system that could communicate with each of them. We called this unified release system Control Tower.

Integration with dependency management tools allows release managers (RMs) to track new code coming in through library updates. RMs can quickly assess code interdependencies and see the progress of changes in a release. Finally, when an RM has checked everything, they can trigger a build through Jenkins.

The Control Tower main view allows RMs to see details from all the relevant systems. Changes are organized by JIRA issue key, and each change item includes links to Crucible code review information and Git repo locations.

By automating, we significantly reduced the amount of human interaction necessary in our release process. In the following image, every grey box represents a manual step that was eliminated.

After automating, we reduced the number of required clicks and Git actions from over 100 to fewer than 15. And new features now add no extra work, instead of requiring 13 extra actions.

To learn even more about Control Tower, see our Indeed Engineering tech talk. We talk about Control Tower starting at 32:45.

Lessons learned

In the process of creating our unified release system, we learned some valuable lessons.

Lesson 1: Automate the process you have, not the one you want

When we first set out to automate our release process, we did what engineers naturally do in such a situation — we studied the process to understand it as best as we could before starting. Then, we did what engineers also naturally do — we tried to improve it.

While it seemed obvious to “fix” the process while we were automating it, we learned that a tested, working process — even one with problems — is preferable to an untested one, no matter how slick. Our initial attempts at automation met with resistance because developers were unfamiliar with the new way.

Lesson 2: Automation can mean more than you think

When most people think of “automating” a process, they assume it means removing decisions from human actors — “set it and forget it.” But sometimes you can’t remove human interaction from a process. It might be too difficult technically, or you might want a human eye on a process to ensure a correct outcome. Even in these situations, automation can come into play.

Sometimes automation means collecting and displaying data to help humans make decisions faster. We found that, even when we needed a human to make a choice, we were able to provide better data to help them make a more informed choice.

Deciding on the proper balance between human and machine action is key to automating. We see future opportunities for improvement by applying machine learning techniques to help humans make decisions even faster.

Lesson 3: Transparency, transparency, transparency

Engineers might not like inefficiency, but they also don’t like mystery. We wanted to avoid a “black box” process that does everything without giving insight as to how and why.

We provide abundant transparency through logging and messaging whenever we can. Allowing developers to examine what the process had done — and why — helped them to trust and adopt the automation solution. Logging also helps should anything go wrong.

Where do we go from here?

Even with our new system in place, we know that we can improve it. We are already working behind the scenes on the next steps.

We are developing algorithms that can monitor issue statuses, completed code reviews, build/test statuses, and other external factors. We can develop systems capable of programmatically understanding when a feature is ready for release. We can then automatically make the proper merge requests and set the release process in motion. This further reduces the time between creating and shipping a feature.

We can use machine learning techniques to take in vast amounts of data for use in our decision-making process. This can point out risky deploys and let us know if we need to spend extra effort testing or if we can deploy with minimal oversight.

Our release management system is an important step toward increasing our software output while maintaining the quality our customers expect. This system is a step, not the final goal. By continually improving our process, by learning as we go, we work toward our ultimate goal — helping even more people get jobs.


Gracefully Degrading Functionality Using Status

In a previous blog post, we described how to use our Status library to create a robust health check for your applications. In this follow-up, we show how you can check and degrade your application during an outage by:

  • short-circuiting code paths of your application
  • removing a single application instance from a data center load balancer
  • removing an entire data center from rotation at the DNS level

Evaluating application health

The Status library allows you to perform two different types of checks on a system — a single dependency check and a system-wide evaluation. A dependency is a system or service that your system requires in order to function.

During a single dependency check, the DependencyManager’s evaluate method takes the dependency’s ID and returns a CheckResult.

A CheckResult includes:

  • the health of the dependency
  • some basic information about the dependency
  • the time it took to evaluate the health of the dependency

A CheckStatus is a Java enum with one of four values: OK, MINOR, MAJOR, or OUTAGE. The OUTAGE status indicates that the dependency is not usable.

final CheckResult checkResult = dependencyManager.evaluate("dependencyId");
final CheckStatus status = checkResult.getStatus();

The second approach to evaluating an application’s health is to look at the system as a whole. This gives you a high-level overview of how the entire system is performing. A system status of OUTAGE indicates that this instance of the application is not usable.

final CheckResultSet checkResultSet = dependencyManager.evaluate();
final CheckStatus systemStatus = checkResultSet.getSystemStatus();

If a system is unhealthy, it’s often best to short-circuit requests made to the system and return HTTP status code 500 (“Internal Server Error”). In the example below, we use a Spring interceptor to capture the request, evaluate the system’s health, and respond with an error if the application is in an outage.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.http.HttpStatus;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

// DependencyManager, CheckResultSet, and CheckStatus come from the Status library.
public class SystemHealthInterceptor extends HandlerInterceptorAdapter {
    private final DependencyManager dependencyManager;

    public SystemHealthInterceptor(final DependencyManager dependencyManager) {
        this.dependencyManager = dependencyManager;
    }

    @Override
    public boolean preHandle(
            final HttpServletRequest request,
            final HttpServletResponse response,
            final Object handler
    ) throws Exception {
        // Evaluate the health of the system as a whole.
        final CheckResultSet checkResultSet = dependencyManager.evaluate();
        final CheckStatus systemStatus = checkResultSet.getSystemStatus();

        switch (systemStatus) {
            case OUTAGE:
                // Short-circuit the request with a 500 when the instance is unusable.
                response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR.value());
                return false;
            default:
                break;
        }

        // Anything better than OUTAGE continues on to the controller.
        return true;
    }
}
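
The post does not show how this interceptor gets registered. As a rough sketch, assuming a Spring MVC setup of the same vintage as the example above, wiring it in might look like the following; the configuration class name and path pattern are illustrative only.

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

// Hypothetical wiring: apply the health interceptor to every request path so that an
// application in OUTAGE short-circuits before any controller code runs.
@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {
    private final DependencyManager dependencyManager; // supplied elsewhere, e.g. via dependency injection

    public WebConfig(final DependencyManager dependencyManager) {
        this.dependencyManager = dependencyManager;
    }

    @Override
    public void addInterceptors(final InterceptorRegistry registry) {
        registry.addInterceptor(new SystemHealthInterceptor(dependencyManager))
                .addPathPatterns("/**");
    }
}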

Comparing the health of dependencies

CheckResultSet and CheckResult have methods for returning the current status of the system or the dependency, respectively. Once you have a CheckStatus, a few methods allow you to compare statuses.

isBetterThan() determines if the current status is better than the provided status. This is an exclusive comparison.

CheckStatus.OK.isBetterThan(CheckStatus.OK)              // evaluates to false
CheckStatus.OK.isBetterThan(/* any other CheckStatus */) // evaluates to true

isWorseThan() determines if the current status is worse than the provided status. Again, this operation is exclusive.

CheckStatus.OUTAGE.isWorseThan(CheckStatus.OUTAGE)          // evaluates to false
CheckStatus.OUTAGE.isWorseThan(/* any other CheckStatus */) // evaluates to true

The isBetterThan() and isWorseThan() methods are great tools for checking an evaluated dependency against a desired state. Unfortunately, on their own they do not offer enough control to produce a graceful degradation: a simple comparison still treats the system as either healthy or in an outage. To better control the graceful degradation of our system, two additional methods were needed.

noBetterThan() returns the unhealthier of the two statuses.

CheckStatus.MINOR.noBetterThan(CheckStatus.MAJOR) // returns CheckStatus.MAJOR
CheckStatus.MINOR.noBetterThan(CheckStatus.OK)    // returns CheckStatus.MINOR

noWorseThan() returns the healthier of the two statuses.

CheckStatus.MINOR.noWorseThan(CheckStatus.MAJOR) // returns CheckStatus.MINOR
CheckStatus.MINOR.noWorseThan(CheckStatus.OK)    // returns CheckStatus.OK

During the complete system evaluation, we use a combination of these methods and the Urgency#downgradeWith() methods to gracefully degrade our application’s health.
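
As a rough illustration (not from the original post) of combining these comparisons, an overall status can be folded across a set of dependencies so that the aggregate is never better than its unhealthiest member. The dependencyIds collection below is hypothetical.

// Hedged sketch: the aggregate status can be no better than the worst dependency,
// so fold each evaluated status in with noBetterThan().
CheckStatus overall = CheckStatus.OK;
for (final String dependencyId : dependencyIds) {
    final CheckResult result = dependencyManager.evaluate(dependencyId);
    overall = overall.noBetterThan(result.getStatus());
}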

Because engineers can inspect the outage state, they can dynamically toggle a feature’s visibility based on the health of its corresponding dependency. Suppose the service that provides company information were unable to reach its database. The service’s health check would change its state to MAJOR or OUTAGE. Our job search product would then omit the company widget from the right rail on the search results page. The core functionality that helps people find jobs would be unaffected.
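
A minimal sketch of that kind of feature toggle might look like the following, assuming a hypothetical dependency ID of "company-info-service"; the real product logic is, of course, more involved.

// Hide the company widget whenever its backing dependency is worse than MINOR,
// i.e. MAJOR or OUTAGE. The dependency ID and flag name are illustrative only.
final CheckResult companyInfo = dependencyManager.evaluate("company-info-service");
final boolean showCompanyWidget = !companyInfo.getStatus().isWorseThan(CheckStatus.MINOR);
if (showCompanyWidget) {
    // render the company widget in the right rail
} else {
    // omit the widget; the core job search results are unaffected
}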

Healthy

Unhealthy (gracefully degraded)

Status offers more than just the ability to control features based on a service’s health. We also use it to control access to instances of our front end web applications. When an instance is unable to service requests, we remove it from the load balancer until it is healthy again.

Instance level failovers

Generally, running multiple instances of your application in production is highly recommended. This helps keep your system resilient by allowing it to continue handling requests even if a single instance of your application crashes. These instances can live on a single machine, on multiple machines, or even in multiple data centers.

The Status library lets you configure your load balancer to remove an instance if it becomes unhealthy. Consider the following basic example within a single data center.

When all of the applications within a single data center are healthy, the load balancer distributes requests among them evenly. To determine if an application is healthy, the load balancer sends a request to the health check endpoint and evaluates the response code.

When an instance becomes unhealthy, the health check endpoint returns a non-200 status code, indicating that it should no longer receive traffic. The load balancer then removes the unhealthy instance from rotation, preventing it from receiving requests.

When instance 1 is removed from rotation, the other instances within the data center start to receive instance 1’s traffic. Within each data center, we provision enough instances so that we can handle traffic even if some of the instances go down.
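
The previous post describes how to build the health check endpoint itself with the Status library. Purely as an illustrative stand-in, a minimal endpoint that maps the system status to an HTTP response code for the load balancer might look like this; the path and the status-to-code mapping are assumptions, not the library’s own endpoint.

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical health check endpoint polled by the load balancer: 200 while the
// instance can serve traffic, 500 when the system is in OUTAGE so the load balancer
// pulls the instance from rotation.
@RestController
public class HealthCheckController {
    private final DependencyManager dependencyManager;

    public HealthCheckController(final DependencyManager dependencyManager) {
        this.dependencyManager = dependencyManager;
    }

    @GetMapping("/healthcheck")
    public ResponseEntity<String> healthCheck() {
        final CheckStatus systemStatus = dependencyManager.evaluate().getSystemStatus();
        if (systemStatus == CheckStatus.OUTAGE) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(systemStatus.name());
        }
        return ResponseEntity.ok(systemStatus.name());
    }
}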

Data center level failovers

Before a request is even sent to a data center, our domain (e.g. www.indeed.com) is resolved to an IP address using DNS. We use a Global Server Load Balancer (GSLB), which allows us to distribute traffic geographically across our data centers. After the GSLB resolves the domain to the IP address of the nearest available data center, the data center load balancer routes and fails over traffic as described above.

What if an entire data center can no longer service requests? Similar to the single instance approach, GSLB constantly checks each of our data centers for their health (using the same health check endpoint). When GSLB detects that a single data center can no longer service requests, it fails requests over to another data center and removes the unhealthy data center from rotation. Again, this helps keep the site available by ensuring that requests can be processed, even during an outage.

As long as a single data center remains healthy, the site can continue to service requests. For users whose requests would have gone to an unhealthy data center, this just looks like a slower page load. While not ideal, the experience is better than an unprocessed request.

The last scenario is a complete system outage. This occurs when every data center becomes unhealthy and can no longer service requests. Engineers try to avoid this situation like the plague.

When Indeed encounters complete system outages, we reroute traffic to every data center and every instance. This policy, known as “failing open,” allows for graceful degradation of our system. While every instance may report an unhealthy state, it is possible that an application can perform some work. And being able to perform some work is better than performing no work.

Status works for Indeed and can work for you

The Status library is an integral part of the systems that we develop and run at Indeed. We use Status to:

  • quickly fail over application instances and data centers
  • detect when a deploy is going to fail before the code reaches a high traffic data center
  • keep our applications fast by failing requests quickly, rather than doing work we know will fail
  • keep our sites available by ensuring that only healthy instances of our applications service requests

To get started with Status, read our quick start guide and take a look at the samples. If you need help, you can reach out to us on GitHub or Twitter.


New Eng Manager at Indeed? First: Write Some Code

I joined Indeed in March 2016 as an “industry hire” manager for software engineers. At Indeed, engineering managers act as individual contributors (ICs) before taking on more responsibilities. Working with my team as an IC prepared me to be a more effective manager.


Before my first day, I talked with a few engineering managers about what to expect. They advised that I would spend about 3-6 months contributing as an individual developer. I would write unit tests and code, commit changes, do code reviews, fix bugs, write documentation, and more.

I was excited to hear about this approach, because in my recent years as an engineering manager, I had grudgingly stopped contributing at the code level. Instead, I lived vicariously through others by doing code reviews, participating in technical design reviews, and creating utilities and tools that boosted team productivity.

When new managers start in the Indeed engineering organization as ICs, they can rotate through several different teams or stay with a single team for about a quarter. I was in the latter camp and joined a team that works on revenue management.

Onboarding as an individual contributor

My manager helped to onboard me and directed me to self-guided coursework on our wiki. I was impressed by the amount of content provided to familiarize new hires with the tools and technologies we use at Indeed. In my experience, most companies don’t invest enough in creating and maintaining useful documentation. Equally as valuable, fellow Indeedians gladly answered my questions and helped me to get unblocked when I encountered technical hurdles. I really appreciated that support as a new employee.

During my time as an IC, I had no management responsibilities. That was a change for me… and it was wonderful! I focused on code. I built technical competence and knocked the rust off mental processes that I hadn’t needed to use for a while. I observed practices and processes used by the team to learn how I could become equally productive. I had a chance to dive deeper into Git usage. I wrote unit and DAO tests to increase code coverage. I learned how to deploy code into the production environment. For the first time in a long while, I wrote production code for new features in a product.

To accelerate my exposure to the 20 different projects owned by my team, I asked to be included on every code review. I knew I wouldn’t be able to contribute to all of the projects, but I wanted to be exposed to as many as possible. Prior to my request, the developer typically selected a few people to do a code review and nominated one to be the “primary” reviewer. Because I was included in every review, I saw code changes and the comments left by team members on how to improve the code. I won’t claim I understood everything I read in every code review, but I did gain an appreciation for the types of changes. I recommend this approach to every new member of a team, not just managers.

Other activities helped me integrate with people outside of my team. For example, I scheduled lunch meetings with everyone who had interviewed me. This was mostly other engineering managers, but I also met with folks in program management and technical writing. Everyone I contacted was open to meeting me. These lunch meetings allowed me to get a feel for different roles: how they planned and prioritized work, their thoughts on going from IC to manager, and the challenges they had faced during their tenure at Indeed. On-site lunches (with great food, by the way) allowed me to meet both engineering veterans and people in other departments.

Transitioning into a managerial role

By the time I was close to the end of my first full quarter, I had contributed to several projects. I had been exposed to some of the important systems owned by my team. Around this time, my manager and I discussed my transition into a managerial role. We agreed that I had established enough of a foundation to build on. I took over 1-on-1 meetings, quarterly reviews, team meetings, and career growth discussions.

Maintaining a technical focus

Many software engineers who take on management roles struggle with the idea of giving up writing code. But in a leadership position, what matters more is engaging the team on a technical level. This engagement can take a variety of forms. Engineering managers at Indeed coach their teams on abstract skills and technical decisions. When managers have a deeper understanding of the technology, they can be more effective in their role.

I am glad that I had a chance to start as an IC so that I could earn my team’s trust and respect. A special shout out to the members of the Money team: Akbar, Ben, Cheng, Erica, Kevin, Li, and Richard.
