Coming Together to Support the Open Source Community


Over the last few weeks, conference and event cancellations around the world have heavily impacted the open source community. These events play an important role in the ecosystem that supports free and open source software. If we want that ecosystem to remain healthy, it is important for us to act now.

Why support matters now more than ever

Conferences and events provide essential opportunities for the open source community to:

  • Coordinate activities
  • Raise funds
  • Grow their user bases
  • Support each other
  • Evangelize their technologies
  • Educate and onboard new contributors
  • Ship new releases
  • Share knowledge

Running these events requires a lot of time, effort, and money. When they are cancelled, the event organizers bear the brunt of the losses. If we want these events to continue—and the open source community to sustain itself and grow—all users of free and open source software must respond.

The FOSS Responders

Open source community leaders from across the industry have come together to form a working group called FOSS Responders. We’re focused on identifying open source events, communities, foundations, and community members who are most in need of support. We also want to support individuals who are unable to absorb conference-related cancellation fees. Our goals: amplify these community needs and mobilize organizational and individual resources to help.

This working group of committed industry professionals includes participants from Indeed, GitLab, Open Collective, the Sustain community, the Drupal Association, and several other organizations. You can find more information about this working group—including information on how to join and participate—at fossresponders.com.

Virtual funding event

Indeed and Open Collective are collaborating to host a virtual funding event on May 22, 2020. We want to raise funds for conference organizers that have suffered irrecoverable losses due to event cancellations. We are calling on our peers in the industry to join us at this event as fellow FOSS Funders.

By coming together to share knowledge, collaborate on decision making, and coordinate our collective response, we can ensure that these events will continue to serve and support the community in the months and years to come. Find more information about the virtual funding event.

Taking action

Regardless of your need or your capacity to help, the time to act is now. Here are some specific actions you can take.

How to help

Now is the time to give back to the projects we depend on. This is how we future-proof our open source infrastructure investments and help the millions of people who build the software we benefit from:

  • Donate to, and buy memberships in, foundations that support your projects
  • Donate to individuals working on projects you use via GitHub Sponsors and Open Collective
  • Donate to the FOSS Responders Open Collective—your funds will be used to help individuals who might otherwise fall through the cracks

How to get help

FOSS Responders will amplify your need so others can easily see whom to help and how. By sharing your need, we can connect you with those who can help.

  • If you are an individual who needs help paying for conference-related cancellation fees, fill out the FOSS Responders Individual Request
  • If you had to cancel an event and your organization needs financial assistance as a result, open an EVENT issue
  • If you need other kinds of help, open an ORGANIZE issue

How to get involved

The FOSS Responders working group is growing quickly, and we could use your help.


Supporting the Open Source Community—cross-posted on Medium.

Improving Incident Retrospectives

This post was originally published on learningfromincidents.io.


As a Site Reliability Engineer (SRE) at Indeed, I often participate in the retrospective process that follows an incident. Retrospectives—in use at Indeed since late 2015—are a meaningful part of our engineering culture. I have never questioned their importance, but recently I was struck by shortcomings I saw in some retrospectives. For example:

  • A retrospective meeting might use only ~30% of the allotted time.
  • What is discussed might be gleaned from reading the incident ticket and retrospective document instead of attending the meeting.
  • Too much focus is devoted to the conditions that “triggered” the incident.
  • Signals used for deciding to hold a retrospective tend to direct focus toward incidents with high impact or high visibility.
  • Participants often weren’t learning anything new.

It became apparent to me that we were not using every incident to realize our full potential to learn.

I decided to explore why so that we could improve our process.

The typical retrospective

Retrospectives at Indeed are usually a one-hour discussion including up to several dozen participants. The meeting is open to anyone in the company, but usually participants have either been involved in the incident response or have a stake in the outcome.

Facilitators follow a prescribed process:

  1. Review the timeline.
  2. Review the remediation items in the template.
  3. Find owners for the remediation items.
  4. Open the room for questions.

Spotting opportunities for improvement

In the summer of 2018, I visited one of our tech sites and was invited to several local retrospective meetings to discuss some recent incidents. As an SRE, it wasn’t unusual for me (or members of my team) to be invited. I also had subject matter expertise in a technology related to the incidents.

The facilitators took about 5 minutes to review the timeline, spent 8-10 minutes reviewing the remediation items, and concluded with questions related to the specific technologies involved in the causal chain. I didn’t learn anything new; I could have gained the same information from reading the incident ticket and retrospective document. This was a rare opportunity: a unique and eager group of people had gathered in a conference room, ready to investigate collaboratively. Instead, we never realized that potential.

This result is not uniform across retrospectives. I’ve been present in retrospectives where the participants offered such rich detail that the conversation continued well beyond the one-hour time limit, culminating with a huddle outside of the conference room.

The facilitators for these particular retrospective meetings followed the process faithfully but used only ~30% of the allotted time. It was clear to me that the retrospective process itself needed improvement.

Nurturing a safety culture

To understand potential changes, I first solicited viewpoints on why we conduct retrospectives at Indeed. The reasons I heard are likely familiar to most software organizations:

  • Find out what caused the outage
  • Measure the impact
  • Ensure that the outage never happens again
  • Create remediation items and assign owners

These goals also reflect Indeed’s strong sense of ownership. When someone’s service is involved in an incident, there’s a concern that we were closer to the edge of failure than we thought we were. Priorities temporarily change and people are more willing to critically examine process and design choices.

It’s important to use these opportunities to direct efforts toward a deeper analysis of our systems (both human and technical) and the assumptions we’ve made about them. These approaches to a different safety culture at Indeed are still relatively new and are evolving toward widespread adoption.

Recommendation: Decouple remediation from the retrospective process

One process change I recommend concerns the creation of remediation items. The retrospective process is not needed as a forcing function to ensure that remediation items are found and owned.

I consistently observe that the creation of remediation items occurs organically after Production is stabilized. Many fixes are obvious to teams in hindsight following an incident.

I see value in decoupling these “after action” activities from the retrospective process for many reasons.

  • The search for remediation items is often a tacit stopping point that halts further or deeper investigation.
  • The accountability around owning remediation items should be tightly coupled to incident ownership.
  • The retrospective process should be an optional activity. By making the retrospective process optional, teams that decide to engage in it are doing so because they see value in it rather than as an obligation or a checklist item.
  • Participants are freed up to conduct a deeper investigation unencumbered by the search for remediation items and shallow explanations.

Recommendation: Lighten up the retrospective template

Another useful change involves the retrospective template itself.

Using retrospective templates can be a lot like filling out forms. The focus is directed toward completion of an activity rather than free exposition. A blank document encourages a different kind of sharing. I have witnessed incidents where responders were so motivated to share their thoughts and descriptions that they produced rich and detailed analysis simply by starting with a blank document.

If every incident is shaped like a snowflake, no single template can capture its unique characteristics. A template constrains detail and prompts explanations through close-ended questions; a blank canvas is open-ended. A template is yet another tacit stopping point that hinders deeper analysis. I recommend that we apply templates to incident analysis, but that we use blank documents for the retrospective process.

Driving organizational change

I have learned a lot by working to drive change at Indeed as we’ve grown quickly. My efforts have benefitted from my tenure in the company, experience participating in hundreds of incidents, and connection to the literature. I have made headway but there is still a lot to do.

I attribute some of my progress so far to finding other advocates in the company and remembering to communicate.

Find advocates

Advocates are colleagues who align closely with my goals, acknowledge where we could be doing better, and share a vision of what could be. I had no trouble finding them: colleagues willing to listen, keep an open mind, and have the patience to consider another perspective. I held numerous 1:1s with leaders and stakeholders across the organization. I found opportunities to bring these topics up during meetings. I gave tech talks and reached out to potential advocates whenever I visited one of our global Engineering offices.

Over-communicate

As much as I might have thought I was communicating what I was working on, it was never enough. I found I had to constantly over-communicate. As I over-communicated across multiple channels, I may have sounded repetitive to anyone in close proximity to my words. But this was the only way to reach the far edges of the organization, where people might not otherwise have heard me. Not everybody has time to read every email or internal blog post.

Looking ahead

Response to these changes has been largely positive. However, the focus during retrospectives is still anchored to technological factors when more attention could be paid to human factors. I’m exploring different avenues for increasing the reach and effectiveness of these efforts, including working with our instructional design team to create a debrief facilitator program, communicating more often and more broadly, making further process changes, continuing to help teams produce and share high-quality write-ups, and focusing on producing educational opportunities. At this point we’ve only scratched the surface, and I’m looking forward to what we will accomplish.


About the author

Alex Elman is a founding member of the Site Reliability Engineering team at Indeed. He leads two teams: one that focuses on Resilience Engineering and one that supports the flagship Job Search product. For the past eight years Alex has been helping Indeed adopt reliability practices to cope with ever increasing complexity and scale. Follow Alex on Twitter @_pkill.


Improving Incident Retrospectives—cross-posted on Medium.

D-Curve: An Improved Method for Defining Non-Contractual Churn with Type I and Type II Errors

Businesses need to know when customers end their business relationships, an act called “churn.” In a subscription business model, a customer churns by actively canceling their contract. The company can therefore detect and record this churn with absolute certainty. But when no explicit contract exists, churn is more passive and difficult to detect. Without any direct feedback from the customer, companies cannot determine whether the customer has lapsed temporarily or permanently.

Until now, detecting churn in such non-contractual relationships has been mostly arbitrary and more art than science.

Various analysts deal with the non-contractual churn conundrum in different ways. One popular approach is to assume the customer has churned if they lapse for a sufficiently long consecutive period of time. A problem with this approach, apart from being guesswork, is that the chosen threshold for the length of the lapse period is often too high. This causes the business to wait too long to identify any churn problems. In Prediction of Advertiser Churn for Google Adwords, the authors are only able to measure churn after 12 months! Such a long wait reduces the value of churn detection and the business’s ability to address problems. In analyses that estimate the churn period as a specified percentile of a distribution of buy cycles—the time between successive customer purchases—choosing an optimal percentile (90th, 95th, 99th, etc.) is difficult.

In this blog post, we present an improved scientific approach for defining non-contractual churn. Our approach avoids the struggle of choosing an optimal percentile by minimizing a well-defined objective function of type I and II errors.

Theory

Churn period (d) is the minimum number of consecutive silent (no transaction) periods at which a customer is considered to have ended their business relationship; that is, a customer who has lapsed for at least d periods is labeled churned. Companies commonly partition a book of business into active and churned customers. Where customer relationships are non-contractual, any specified d will have associated type I and type II errors. Therefore we should choose the definition that minimizes an objective function of these errors. In our approach, we specify the function to be a weighted average of the errors:

F(d) = w · e1(d) + (1 − w) · e2(d)    (1)

where:

  • e1(d) is the expected type I error associated with churn definition d; a type I error is labeling a customer as churned when they are active;
  • e2(d) is the expected type II error associated with churn definition d; a type II error is labeling a customer as active when they have churned;
  • w is the weight the analyst places on type I errors relative to type II errors; it can be interpreted as the relative cost of the two error types.

The optimal churn definition, denoted d*, is therefore the one that minimizes F(d). We call F(d) the d-curve. For example, with w = 0.5, F(d) is the simple average of the two error rates.

To compute the error functions, e1(d) and e2(d), we need to introduce another set of notation:

  • ci represents the true churn status of customer i, 0=Active, 1=Churned;
  • li represents the number of consecutive periods customer i has lapsed.

With the above definitions, e1(d) and e2(d) are derived as follows, where 1{·} denotes the indicator function and a customer is labeled churned when li ≥ d:

e1(d) = Σi 1{li ≥ d} · (1 − ci) / Σi (1 − ci)    (2)

e2(d) = Σi 1{li < d} · ci / Σi ci    (3)

From (2) and (3), we see that e1(d) is the overall proportion of active customers mislabeled as churned. Similarly, e2(d) is the overall proportion of churned customers mislabeled as active.
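
To make (2) and (3) concrete, here is a minimal sketch in Python (NumPy) that computes the empirical error rates from arrays of lapse periods and true churn labels. The function name and the `lapse >= d` labeling rule reflect our reading of the definitions above, not code from the original analysis.

```python
import numpy as np

def error_rates(lapse: np.ndarray, churned: np.ndarray, d: int):
    """Empirical type I and type II error rates for churn definition d.

    lapse   -- consecutive lapse periods l_i, one entry per customer
    churned -- true churn status c_i (0 = active, 1 = churned)
    """
    predicted_churn = lapse >= d                 # label churned once lapse reaches d
    active = churned == 0
    e1 = np.mean(predicted_churn[active])        # active customers mislabeled as churned
    e2 = np.mean(~predicted_churn[~active])      # churned customers mislabeled as active
    return e1, e2
```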

Implementing the theory

Suppose you have data recording the periods of all customer transactions from time S to T.

To determine the optimal churn definition, complete the following experiment (a code sketch implementing these steps follows the list):

  1. Specify the minimum number of periods, D, beyond which you are almost sure that the customer has truly churned. You can do this by empirically examining distributions of customer buy cycles (the difference in periods between successive customer transaction dates) and choosing a sufficiently high percentile. We’ll call D the validation period. This means that the subjects of the experiment have to be limited to the subset of customers who have at least one transaction prior to T-D; otherwise we cannot calculate the customer’s true churn status, ci. Also, the length of the entire data window (T-S) should be long enough to allow you to evaluate the selected domain of churn definitions for the d-curve, F(d). For example, if the domain is {d : d < K+1}, then T-S must exceed K+D.
  2. If you are only interested in voluntary churn, remove all customers terminated involuntarily by the company.
  3. For each customer i, determine the last purchase period as of time T:
     pi(T) = max{t ≤ T : customer i transacted in period t}
     Calculate the lapse period as of time T:
     li(T) = T − pi(T)
     And calculate the true churn status:
     ci = 1 if li(T) ≥ D, and 0 otherwise
  4. For each customer i, calculate the last purchase period as of time T-D:
     pi(T-D) = max{t ≤ T-D : customer i transacted in period t}
     Calculate the lapse period as of time T-D:
     li(T-D) = (T-D) − pi(T-D)
  5. Select the domain of churn definitions, {d : d ≤ K}, on which you want to minimize F(d).
  6. For each churn definition in the selected domain, d = 0, 1, 2, …, K, predict churn status for each customer as of time T-D, and measure the type I and II errors, e1(d) and e2(d). Notice that e1(d) and e2(d) can be calculated from the data as follows:
     e1(d) = Σi 1{li(T-D) ≥ d} · (1 − ci) / Σi (1 − ci)
     e2(d) = Σi 1{li(T-D) < d} · ci / Σi ci
  7. Select an appropriate weight, w.
  8. For d = 0, 1, 2, …, K, derive F(d) using (1).
  9. Choose the d that minimizes F(d) as your optimal d.
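
Putting the steps together, the experiment might look like the following minimal sketch in Python (pandas). The `tx` table with `customer` and `period` columns is a hypothetical input, periods are assumed to be integer indices (e.g., months since S), the labeling rule is the `lapse >= d` convention used above, and the involuntary-churn filter from step 2 is omitted.

```python
import numpy as np
import pandas as pd

def optimal_churn_definition(tx: pd.DataFrame, T: int, D: int, K: int, w: float = 0.5):
    """Sketch of steps 1-9: find the churn definition d* that minimizes F(d).

    tx -- one row per transaction, with columns 'customer' and 'period'
    T  -- last observed period; D -- validation period; K -- largest d to try
    w  -- weight on type I errors relative to type II errors
    """
    # Step 1: keep only customers with a transaction at or before T - D,
    # since we cannot establish a true churn status for the rest.
    last_before = tx[tx["period"] <= T - D].groupby("customer")["period"].max()

    # Step 3: lapse and true churn status c_i as of time T.
    last_at_T = tx.groupby("customer")["period"].max().loc[last_before.index]
    churned = ((T - last_at_T) >= D).astype(int)   # c_i = 1 if lapsed >= D periods

    # Step 4: lapse as of time T - D, using transactions up to T - D only.
    lapse_TD = (T - D) - last_before

    # Steps 5-9: sweep candidate definitions and minimize the weighted error.
    rows = []
    for d in range(K + 1):
        predicted = lapse_TD >= d                  # predicted churn status at T - D
        e1 = np.mean(predicted[churned == 0])      # type I error rate
        e2 = np.mean(~predicted[churned == 1])     # type II error rate
        rows.append((d, e1, e2, w * e1 + (1 - w) * e2))
    curve = pd.DataFrame(rows, columns=["d", "e1", "e2", "F"])
    d_star = int(curve.loc[curve["F"].idxmin(), "d"])
    return d_star, curve
```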

Results from a real-world application

We identified one of Indeed’s non-contractual products—job sponsorship—and applied both the percentile and d-curve methods to defining its churn period. We used monthly transaction data from September 2016 (S) through September 2019 (T).

Note that while the trends and insights we share are consistent with actual findings, we adjusted the actual results to protect the security of Indeed’s data.

Percentiles method

In this approach, we calculate the buy cycles for each customer. We can then represent each customer by a summary statistic (mean, median, or max) of their buy cycles. Finally, we generate the distribution of the summary statistic across customers:

Quantile    Mean    Median    Max
0           1       1         1
0.2         2       2         2
0.4         2       2         2
0.6         3.5     3         5
0.8         4.7     3         9
0.9         6.2     5         13
0.95        8       7         17
0.99        15      15        25
1           38      38        38

(All figures illustrative.)
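
As a sketch, a table like the one above could be produced along these lines in Python (pandas), reusing the hypothetical `tx` transactions table from the earlier sketch:

```python
import pandas as pd

# Buy cycle: gap (in periods) between a customer's successive purchases.
tx_sorted = tx.sort_values(["customer", "period"])
tx_sorted["cycle"] = tx_sorted.groupby("customer")["period"].diff()

# Summarize each customer's buy cycles, then look at the distribution
# of those summary statistics across customers.
per_customer = tx_sorted.groupby("customer")["cycle"].agg(["mean", "median", "max"])
quantiles = per_customer.quantile([0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 0.99, 1.0])
print(quantiles)
```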


These results illustrate the analytic dilemma associated with the percentiles method. The distribution varies by the choice of summary statistic. Even with a given summary statistic, it’s not clear which percentile (90th, 95th or 99th) is optimal. Apart from that, any reasonable choice of percentile results in unnecessarily high churn definitions. For example, the 95th percentile of the distribution of mean buy-cycles is 8 months, while that of maximum buy-cycles is 17 months! And we will see in the next approach that while such longer definitions have lower type I errors, they have higher type II errors.

The d-curve approach deals with all of these problems by choosing the churn definition with the minimum weighted sum of the type I and II errors.

D-curve approach

We parameterized our model as follows:

  • w = 0.5
  • D = 12
  • K = 12
  • S = 09-2016
  • T = 09-2019
  • T-D = 09-2018

Churn period    Type I error (%)    Type II error (%)    Weighted error (%)
0               100.0               0.0                  50.0
1               43.8                6.4                  25.1
2               33.0                13.1                 23.1
3               26.6                19.0                 22.8
4               21.9                24.8                 23.3
5               17.8                30.8                 24.3
6               14.7                36.8                 25.7
7               12.2                42.4                 27.3
8               10.4                46.7                 28.6
9               8.9                 50.8                 29.9
10              8.0                 54.1                 31.0
11              6.9                 58.2                 32.5
12              5.8                 62.6                 34.2

Using the d-curve, we choose 3 months as our optimal churn definition: the weighted error reaches its minimum at d = 3 (0.5 × 26.6 + 0.5 × 19.0 = 22.8). A hypothesis test at the 1% level of significance rejects the null hypothesis that the error for d = 3 equals that of d = 4.

More applications for the d-curve

We have formulated a framework for optimally selecting thresholds. While we apply the approach here to define churn periods for non-contractual relationships, it has many other real-world applications, chief among them determining threshold probabilities in classification. A sketch follows.
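
As an illustration (not from the original analysis), the same weighted-error minimization can pick a classifier's probability threshold; the function name, candidate grid, and default weight below are arbitrary choices:

```python
import numpy as np

def optimal_threshold(p: np.ndarray, y: np.ndarray, w: float = 0.5) -> float:
    """Choose the probability threshold minimizing w*FPR + (1 - w)*FNR.

    p -- predicted probabilities of the positive class
    y -- true binary labels (0 or 1)
    """
    best_t, best_f = 0.0, np.inf
    for t in np.linspace(0.0, 1.0, 101):           # candidate thresholds
        pred = p >= t
        fpr = np.mean(pred[y == 0])                # type I error analogue
        fnr = np.mean(~pred[y == 1])               # type II error analogue
        f = w * fpr + (1 - w) * fnr
        if f < best_f:
            best_t, best_f = t, f
    return best_t
```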

Acknowledgements

We are particularly grateful to Trey Causey, Ehsan Fakharizadi and Yaoyi Chen for their review and excellent feedback. We are, however, responsible for any mistakes in the post.


Cross-posted on Medium.