k8dash: Indeed’s Open Source Kubernetes Dashboard

So you’ve got your first Kubernetes (also known as k8s) cluster up and running. Congratulations! Now, how do you operate the thing? Deployments, replica sets, stateful sets, pods, ingress, oh my! Getting Kubernetes running can feel like a big enough challenge in and of itself, but does day two of operations need to be just as much of a challenge?

Kubernetes is an amazing but complex system. The learning curve can be steep. Plus, the standard Kubernetes dashboard has limited features. Another option is kubectl, which is extremely powerful but also a power user tool. Even if you become a kubectl wizard, can you expect everyone in your organization to do the same? And with kubectl it’s difficult to gain visibility into the general health and performance of the entire cluster all at once.

Enter k8dash—pronounced Kate Dash (/kāt,daSH/)—Indeed’s open source Kubernetes dashboard.

k8dash dashboard

Since k8dash’s release in March of 2019, it has received over 625 GitHub stars and been downloaded from Docker Hub over 1 million times. k8dash is a key component of Kubernetes operations for many organizations.

In May of 2020, the Indeed Engineering organization adopted the k8dash project. We’re excited about the visibility this brings to the project.

Benefits of managing your Kubernetes cluster with k8dash

Here are a few of the benefits of k8dash. See more of k8dash in action here.

Quick installation

Low operational complexity is a core tenet of the k8dash project. As such, you can install k8dash quickly with a couple dozen lines of YAML. k8dash runs only a single service. No databases or caches are required. This extends to AuthN/AuthZ via OIDC. If you’re already using OIDC to secure your cluster, k8dash makes extending this to your dashboards easy: configure 2-3 environment variables and you’re up and running. No special authenticating proxies or other complicated configurations are required.
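As an illustrative sketch of what that installation looks like (the image name, port, and OIDC variable names below are assumptions; consult the k8dash README for the actual manifest and supported settings):

```yaml
# Hypothetical sketch of a minimal k8dash install; see the project README
# for the real manifest, image tag, and OIDC settings.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8dash
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8dash
  template:
    metadata:
      labels:
        app: k8dash
    spec:
      containers:
        - name: k8dash
          image: herbrandson/k8dash:latest  # assumed image name
          ports:
            - containerPort: 4654           # assumed service port
          env:
            # If your cluster already uses OIDC, point k8dash at the same
            # provider (variable names assumed; check the README):
            - name: OIDC_URL
              value: https://your-oidc-provider.example.com
            - name: OIDC_CLIENT_ID
              value: k8dash
---
apiVersion: v1
kind: Service
metadata:
  name: k8dash
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 4654
  selector:
    app: k8dash
```

A single Deployment plus a Service is the entire footprint, which is what keeps operational complexity low.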

Cluster visualization and management

k8dash helps you understand the current status of all of your cluster’s moving parts: namespaces, nodes, pods, deployments. Real-time charts show poorly performing resources. The intuitive interface removes much of the behind-the-scenes complexity and helps flatten your Kubernetes learning curve.

You can manage your cluster components via the dashboard and leverage k8dash’s YAML editor to edit resources. k8dash uses the Kubernetes API and provides context-aware API docs. With k8dash you can view pod logs and even SSH directly into a running pod via a terminal directly in your browser, with a single click.

k8dash also integrates with Metrics Server, letting you visualize CPU/RAM usage. This visualization helps you understand how well your services are running. As Kubernetes simplifies the complexity of running hundreds or even thousands of microservices across an abstract compute pool, it brings the promise of improved resource utilization through bin packing. However, for many organizations this promise goes unrealized because it can be difficult to know which services are over- or under-provisioned. k8dash’s intuitive UI takes the guesswork out of understanding how well services are provisioned.

Real-time dashboard

Because k8dash is a real-time dashboard, you don’t need to refresh pages to see the current state of your cluster. Instead, you can watch charts, graphs, and tables update in real time as you roll out a deployment. You can watch as nodes are added to and removed from your cluster, and know as soon as new nodes are fully available. Or, simply monitor a stream of Kubernetes cluster-wide events as they happen.

Because k8dash is mobile optimized, you can monitor—and even modify—your cluster on the go. If you’re getting paged about a troublesome pod just as your movie is about to start, with k8dash you can restart the pod directly from your phone!

The k8dash project: How to contribute

k8dash is made up of a lightweight server and a client, and we’re always looking for core contributors.

The server—built in Node.js and weighing in at ~150 LOC—is predominantly a proxy for the front end to the Kubernetes API server. The k8dash server makes heavy use of the following npm packages:

  • express (web server)
  • http-proxy-middleware (proxies requests to Kubernetes API)
  • @kubernetes/client-node (official Kubernetes npm module, used to discover the Kubernetes API location)
  • openid-client (fantastic npm module for OIDC)

The client is a React.js application (using create-react-app) with minimal additional dependencies.

If you would like to contribute, see the list of issues in GitHub.


About the author

Eric Herbrandson is a staff software engineer and member of the site reliability engineering team at Indeed. Eric has used orchestration frameworks, including ECS, Heroku, Docker Swarm, HashiCorp’s Nomad, DC/OS (Marathon/Mesos), and Kubernetes. While finding Kubernetes to be the clear winner in the orchestration space, Eric recognized that existing visualization options were lacking compared to other frameworks. In an effort to better understand the Kubernetes API and to create a solution that contained all of the features he needed, Eric developed k8dash over a three-week period.

Cross-posted on Medium.


Jackson: A Growing User Base Presents New Challenges

Jackson is a mature and feature-rich open source project that we use, support, and contribute to here at Indeed. In my previous post, I introduced Jackson’s core competency as a JSON library for Java. I went on to describe the additional data formats, data types, and JVM languages Jackson supports. In this post, I will present my view, as Jackson’s creator and primary maintainer, of the challenges resulting from Jackson’s expansion. I’ll also share our plans to address these challenges as a community.

The other side of Cape Flattery by Tatu Saloranta


Growing pains

Over its 13 years, the Jackson project added a lot of new functionality with the help of a large community of contributors. Users report issues and developers contribute fixes, new features, and even new modules. For example, the Java 8 date/time datatype module—which supports Java 8 date/time types, java.time.*, specified in JSR-310—was developed concurrently with the specification itself by a member of the community.

Because of its ever-expanding functionality and user base, Jackson is experiencing growing pains and could use help with:

  • Improving the documentation
  • Enhancing the project’s website and branding
  • Managing and prioritizing large-scale changes and features
  • Refining the details of reported issues
  • Testing the compatibility of new Jackson versions against downstream dependencies

Documentation’s structure and organization

The availability, accessibility, and freshness of documentation are always a challenge for widely used libraries and frameworks. Jackson is no exception. It may even suffer from more documentation debt than other libraries with a comparable user base.

The problem is not so much a lack of documentation per se. A lot of content exists already:

  • Third-party tutorials
  • Jackson project repos
  • StackOverflow

Jackson also has a Javadoc reference that documents most classes of interest. However, Jackson is a vast library. The Javadoc reference can seem overwhelming, especially to new users. And the Javadoc reference does not include usage examples.

Our first priority is creating How-To guides that walk library consumers through the most typical use cases, such as:

  • How to convert a given JSON structure to match Java objects.
  • How to read and write other data formats, such as XML.
  • How to use Jackson within frameworks, such as Spring/Spring Boot, DropWizard, JAX-RS/Jersey, Lombok, and Immutables.

We also want to improve the Javadoc reference. Some classes and methods have no descriptions, while descriptions for others are incomplete. In addition to the auto-generated Javadoc reference, we publish manually created wiki pages that detail the most used features and objects. We’d like to find ways to auto-update these wiki pages.

To implement these modifications, we need the following documentation structure, tooling, and process changes:

  • A super structure for adding inline content and links to external content
  • Access permissions for contributors to add and correct documentation
  • A documentation feedback mechanism to help focus improvement and maintenance efforts
  • Documentation templates to make it easier to add information

Project website and branding

Even though the project is 13 years old and widely used, the main project home page still has the stock GitHub project look. This repo contains a lot of helpful information about Jackson, but it doesn’t have a distinct style, brand, or logo.

For Hacktoberfest 2020, we created an issue for a Jackson logo design.

Managing large-scale changes

We appreciate the healthy stream of code contributions—mostly to fix reported issues, but for new features as well.

The most valuable type of contribution for the Jackson project has historically been user bug reports filed as GitHub issues. We get a dozen issues each week, leading to fixes and quality improvements. Without these reports, Jackson would not be half as good as it is. The new feature requests we receive regularly are another valuable source of improvements. New feature requests also help us prioritize new development work.

Better management of larger changes and features is an area for improvement. These efforts take a long time to execute and benefit from careful planning and collaboration. While it’s possible to handle bigger changes under a single issue, this can get unwieldy quickly.

The Jackson project could use help with setting the priority of new feature requests.

  • Which issues should we focus on improving?
  • Can we gauge the consensus of the whole community instead of relying on a small number of active and vocal participants? This would help avoid giving priority to only the squeakiest wheels.
  • What’s the best way to obtain early feedback on API and implementation plans? Using existing mailing lists, issue trackers, and chat all have their challenges.

To address the need for more community feedback, I introduced the idea of “mini-specifications” called Jackson STrategic Enhancement Plans: JSTEPs. Similar to Kafka’s KIPs, JSTEPs are intended to foster collaboration. So far this has had limited success, partly due to the limitations of the tooling: GitHub wiki pages do not have the same feature set as GitHub issues or Google documents.

I also started keeping a personal but public todo or work-in-progress (WIP) list—Jackson WIP—as a GitHub wiki page. I wanted a low-maintenance mechanism for me to track and share my own near-term plans.

Refining reported issues, collaboratively

As mentioned above, bug reports have historically been the most useful type of feedback. But while useful, managing reported issues and new feature requests takes a lot of effort. This has become especially challenging now that the data formats, data types, JVM languages, and user base have grown.

Basic issue triage has become time consuming and includes the following steps:

  • Validating the reported issues.
  • Determining if the reported issues resulted from documentation that is missing or unclear.
  • Asking for more information.

The triage time takes away from the limited time we have to work on functionality and documentation. It can also slow down responses to issue reports, which in turn leads to a poor reporter experience.

We would like to find ways to train and include volunteer issue triagers. They can help with refining issues in a timely manner and improving user engagement by:

  • Adding labels
  • Asking questions to get more information
  • Adding findings of their own
  • Getting module and domain experts engaged where applicable

Starting with jackson-databind issues, we improved the workflow and refined GitHub issues by creating:

  • Issue templates to help users submit better issue reports
  • The “to-evaluate” label—set automatically for new issues
  • The “most-wanted” label—which indicates issues that have consistently come up on mailing lists or up-voted using the GitHub thumbs-up emoticon

If these changes are successful, we’ll propagate them to other project repos. Similarly, we can make other incremental changes along these lines.

Compatibility testing of new versions with dependencies

The unit test coverage in the Jackson project’s code is reasonable, and coverage for core components is quite good. There’s even a nascent effort to write more inter-component functional tests in the jackson-integration-tests repo. However, there is currently a sizable gap in testing the compatibility between the minor Jackson version that is under development and downstream dependencies, such as frameworks like Spring Boot.

One challenge is the difference between the development lifecycles of Jackson and the libraries and frameworks that use Jackson. Existing libraries and frameworks depend on the stable Jackson minor version and test everything only against that version. Version 2.11 is the current stable version. They report any issues they find. We may fix the issues in a patch release, like 2.11.1.

Simultaneously, the Jackson project is developing a new minor version, 2.12, which will be released and published as version 2.12.0. Working on versions 2.11.1 and 2.12 concurrently should not be an issue. We use the standard semantic versioning scheme for the Jackson public API, which should guard against creating new issues.

In practice, standard semantic versioning cannot protect against all actual issues. Semantic versioning applies to Jackson’s public API, but internal APIs across core components do not have the same level of backwards compatibility. And while maintainers think in terms of the published and documented API, users tend to rely on the API’s actual observed behavior rather than its intended behavior. Sometimes what maintainers consider a bug or unexpected behavior, users consider a feature that works as expected.

That said, we can catch and address such issues during the development cycle of the new version if we test downstream systems using the under-development SNAPSHOT version of Jackson. At the time of writing this article, the current version of Spring Boot ships with a dependency on Jackson 2.11.2, the latest patch version. We can create an alternative set of Spring Boot unit and integration tests that instead rely on Jackson 2.12.0-SNAPSHOT. The snapshot version is published regularly, and the Jackson project provides automatic snapshot builds.
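As a sketch of what this could look like in a downstream project’s Maven build (the snapshot repository URL here is an assumption; verify it against the Jackson project’s documentation):

```xml
<!-- Sketch: pull the in-development Jackson minor version
     (2.12.0-SNAPSHOT, per the article) into a test profile.
     Repository URL is an assumption; check Jackson's docs. -->
<repositories>
  <repository>
    <id>sonatype-snapshots</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
<dependencyManagement>
  <dependencies>
    <!-- The jackson-bom pins all Jackson components to one version -->
    <dependency>
      <groupId>com.fasterxml.jackson</groupId>
      <artifactId>jackson-bom</artifactId>
      <version>2.12.0-SNAPSHOT</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Running an existing test suite against this configuration would surface incompatibilities while the new minor version is still under development.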

Such cross-project integration tests would benefit all involved projects, preventing the release of some (or perhaps most) regression issues and reducing the need for patch versions. Over time they should also significantly improve compatibility across Jackson versions, closing the gap between the expected and actual behavior of the API.

How to get involved

Jackson’s growing user base and features have presented us with documentation, branding, issue triaging, issue prioritization, and code testing challenges. I’ve shared some of the steps we’ve taken to address them. You can help us by implementing some of these solutions or suggesting alternates. Together we can lay a foundation for Jackson’s future growth. For contribution opportunities, visit Jackson Hacktoberfest 2020. To find out more, join us on Gitter chat or sign up for the jackson-dev mailing list.


About the author

Tatu Saloranta is a staff software engineer at Indeed, leading the team that is integrating the next-generation continuous deployment system. Outside of Indeed, he is best known for his open source activities. Under the handle @cowtowncoder, he has authored many popular open source Java libraries, such as Jackson, Woodstox, lzf compression codec, and Java ClassMate. For a complete list see FasterXML Org.

 

Cross-posted on Medium.


Jackson: More than JSON for Java

Jackson is a mature and feature-rich open source project that we use, support, and contribute to here at Indeed. As Jackson’s creator and primary maintainer, I want to highlight Jackson’s core competencies, extensions, and challenges in this two-part series.

Cape Flattery by Tatu Saloranta, Jackson's creator


Jackson’s core competency

If you’re creating a web service in Java that reads or returns JSON, you need a library to convert Java objects into JSON and vice versa. Since this functionality is not available in the Java standard library (JDK), you can use the Jackson library for its core competency: writing Java objects as JSON and reading JSON as Java objects. Later in this post, I introduce Jackson’s additional features. 

As a Java JSON library, Jackson is mature and widely used: it is downloaded 25 million times per month via Maven Central, and 16,000 projects depend on jackson-databind. It’s the second most widely used Java library, after Guava, according to the Core Infrastructure Initiative’s census results.

You can use Jackson directly, but nowadays users are more likely to encounter its functionality through libraries and frameworks that use Jackson by default for JSON handling, exposing JSON requests, responses, and configuration files as Java objects.

Some examples are:

  • Spring and Spring Boot
  • Dropwizard
  • JAX-RS implementations such as Jersey (via the Jackson JAX-RS providers)

Users of these frameworks, libraries, and clients may not even be aware that they utilize Jackson. They are more likely to learn this when troubleshooting usage issues or making changes to default handling.

Examples of Jackson usage

The following example shows how to annotate request and response models in the Spring Boot framework.

// Model of updated and accessed content
public class User {
  private Long id;
  private String name;
  @JsonProperty("dob") // customize name used in JSON
  private LocalDate dateOfBirth;
  private LocalDateTime lastLogin;
  // plus public Getters, Setters
}

// Service endpoint definition
@RestController
@RequestMapping("/users")
public class UserController {
  // ...
  @GetMapping("/{id}")
  public User findUserById(@PathVariable Long id) {
    return userStorage.findUserById(id);
  }
  @PostMapping(consumes = MediaType.APPLICATION_JSON_VALUE)
  @ResponseStatus(HttpStatus.CREATED)
  public Long createUser(@RequestBody User user) {
    return userStorage.createUser(user);
  }
}

In this example, the Spring Boot framework reads JSON as an instance of the User class. In the POST method, it passes a User instance to a storage handler. It also fetches the User instance via a storage handler for a given ID and writes serialized JSON of that instance. For a full explanation, see Jackson JSON Request and Response Mapping in Spring Boot.

You can also read and write JSON directly via the Jackson API, without any framework or library, as follows:

final ObjectMapper mapper = new ObjectMapper();
// Read JSON as a Java object:
User user;
try (final InputStream in = requestContext.getInputStream()) {
  user = mapper.readValue(in, User.class);
}
// Write a Java object as JSON
try (final OutputStream out = requestContext.getOutputStream()) {
  mapper.writeValue(out, user);
}

Jackson is more than JSON for Java

While Jackson’s core functionality originally started as JSON for Java, it quickly expanded through modules, which are pluggable extension components you can add to the Jackson core. These modules support features that the core does not handle by default. Jackson also has extensions for other JVM languages.

These are some of the types of Jackson modules and extensions that are especially helpful:

  • Data format modules
  • Data type modules
  • Modules for Kotlin, Scala, and Clojure

Data format modules

The data format modules allow you to read and write content that is encoded in formats other than JSON. The low-level encoding details of JSON differ from those of YAML, CSV, or Protobuf. However, much of the higher-level data-binding functionality is similar or identical. The higher-level data binding deals with the structure of Java Objects and expressing them as token streams (or a similar abstraction).

Most of the code in the Jackson core is format-independent and only a small part is truly JSON-format specific. So, these so-called data format modules can easily extend Jackson to read and write content in other data formats. The modules implement low-level streaming from the Jackson API, but they share common data-binding functionality when it comes to converting content to and from Java objects.

In the implementation of these modules, you find factories for streaming parsers and generators. They use a specific factory to construct an ObjectMapper that handles format-specific details, while you only interact with a (mostly) format-agnostic mapper abstraction.

The very first data format module added to Jackson was jackson-dataformat-xml for XML. Supported data formats now include Avro, CBOR, CSV, Ion, (Java) Properties, Protobuf, Smile, XML, and YAML.

The usage for data format modules is similar to the Jackson API JSON usage. You only need to change JsonMapper (or the generic ObjectMapper) to a format-specific alternative like XmlMapper. There are format-specific features that you can enable and disable. Also, some data formats, such as Avro, CSV, and Protobuf, require additional schema information to map content into a JSON-like token representation. But across all formats, the API usage is similar.

Examples

Compared to the previous example that simply reads and writes JSON, these are alternatives for reading and writing other data formats:

// XML usage is almost identical except it uses a different mapper object
ObjectMapper xmlMapper = new XmlMapper();
String doc = xmlMapper.writeValueAsString(user);
User user = xmlMapper.readValue(doc, User.class);

// YAML usage is almost identical except it uses a different mapper object
ObjectMapper yamlMapper = new YAMLMapper();
byte[] asBytes = yamlMapper.writeValueAsBytes(user);
User user2 = yamlMapper.readValue(asBytes, User.class);

// CSV requires a schema for most operations (to convert between property
// names and column positions).
// You can compose the schema manually or generate it from a POJO.
CsvMapper csvMapper = new CsvMapper();
CsvSchema userSchema = csvMapper.schemaFor(User.class);
String csvDoc = csvMapper.writer(userSchema)
    .writeValueAsString(user);
User user3 = csvMapper.readerFor(User.class)
    .with(userSchema)
    .readValue(csvDoc);

Data type modules

Jackson core allows you to read and write Plain Old Java Objects (POJOs) and most standard JDK types (strings, numbers, booleans, arrays, collections, maps, dates, calendars, URLs, and UUIDs). However, many Java projects also use value types defined by third-party libraries, such as Guava, Hibernate, and Joda. Handling instances of these value types as simple POJOs does not work well, especially for collection types. Without explicit support, you would have to extend Jackson yourself with custom serializers and deserializers. Adding such explicit support to Jackson core for even the most common Java libraries would be a huge undertaking and could also lead to problematic dependencies.

To solve this challenge, Jackson added a concept called data type modules. These are extensions built and packaged separately from core Jackson and added to the ObjectMapper instance during construction. Data type modules are released by both the Jackson team and external contributors such as authors of third-party libraries. Authors usually add these modules because they want to solve a specific use case and then share the fruits of their labors with others.

Due to the pluggability of these modules, it is possible to use data type modules with different formats and to mix different value types. For example, you can read and write Hibernate-backed Guava ImmutableLists that contain Joda-defined Period values as JSON, CSV, or XML.
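Registration is uniform across modules. Here is a minimal sketch using the jackson-datatype-jsr310 module (the Java 8 date/time module mentioned earlier); other data type modules plug in the same way:

```java
import java.time.LocalDate;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class DataTypeModuleExample {
    public static void main(String[] args) throws Exception {
        // Data type modules are added to the mapper during construction
        ObjectMapper mapper = new ObjectMapper()
                .registerModule(new JavaTimeModule())
                // write java.time values as ISO-8601 strings, not numeric arrays
                .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);

        String json = mapper.writeValueAsString(LocalDate.of(2019, 3, 15));
        System.out.println(json); // "2019-03-15"
    }
}
```

Without the module registration, the same call would fail for java.time types; with it, the mapper treats LocalDate like any other supported value type.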

The list of known data type modules is long—see the Jackson Portal. Here are some examples:

  • jackson-datatype-guava (Guava types such as ImmutableList)
  • jackson-datatype-hibernate (Hibernate types and lazy loading)
  • jackson-datatype-joda (Joda-Time date/time types)
  • jackson-datatype-jsr310 (Java 8 date/time types, java.time.*)

In addition to the data type module implementations, many frameworks also directly support Jackson data type module usage. In particular, various “immutable values” libraries offer such support, such as:

  • Immutables
  • Lombok

Modules for Kotlin, Scala, Clojure

If you’re using Jackson, you’re not limited to only using POJOs and JDK types with other JVM languages. Jackson has extensions to handle custom types of many other JVM languages.

The following Jackson modules support Kotlin and Scala:

  • jackson-module-kotlin
  • jackson-module-scala

And for Clojure, there are a few libraries that use Jackson under the hood to implement similar support, such as:

  • cheshire
  • jsonista

This simplifies interoperability further, making it easier to use Java functionality from Kotlin, Scala, or Clojure, and vice versa.

What’s next

In my next post, I will share my observations on the challenges that the Jackson project faces and how you can contribute to help.


About the author

Tatu Saloranta is a staff software engineer at Indeed. He leads the team that is integrating the next-generation continuous deployment system. Outside of Indeed, he is best known for his open source activities. Under the handle @cowtowncoder, he has authored many popular open source Java libraries, such as Jackson, Woodstox, lzf compression codec, and Java ClassMate. For a complete list see FasterXML Org.

Cross-posted on Medium.
