
Meditation on Backwards Compatibility


Product versioning teaches some unique lessons about software design. It reinforces important practices, such as organizing functional pieces into layers (repository, service, business, etc.), but some best practices make versioning more complicated. For example, “Don’t Repeat Yourself” means that multiple versions of the product will share functionality, which makes it harder to change or delete a version. Our team recently completed an API migration of a product with two versions. The product contained endpoints in various stages of deprecation, and we were initially expected to migrate only the Version Two endpoints. In the end, the partner required that all the endpoints be migrated, so we faced the interesting situation of writing Version Two before writing Version One. Here are some lessons we learned in the process.
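To make the DRY tension concrete, here is a minimal sketch of what sharing a mapper across versions can look like. The names (Product, mapBaseFields, toV1Response, toV2Response) are illustrative, not taken from the actual code base.

```typescript
// Hypothetical sketch: a mapper shared by both endpoint versions.
interface Product {
  id: string;
  name: string;
  priceCents: number;
}

// Shared by both versions in the name of DRY...
function mapBaseFields(product: Product) {
  return { id: product.id, name: product.name };
}

// ...so deleting Version One later means auditing every caller of mapBaseFields.
function toV1Response(product: Product) {
  return { ...mapBaseFields(product), price: product.priceCents / 100 };
}

function toV2Response(product: Product) {
  return { ...mapBaseFields(product), priceCents: product.priceCents };
}
```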

Minimize changes

If you’re migrating a code base and making discretionary changes in the process, the underlying assumption is that you know better than the person who wrote it. And while the odds are maybe 50-50 that you could write objectively cleaner code (if such a thing exists), the odds are lower that you know more about the implementation and design of the product than the person who wrote it. I was working with a more experienced teammate, and while I was comfortably renaming files, condensing functions, and pulling out abstractions, he was diligently migrating endpoints piece by piece. As we got deeper into the implementation details, the wisdom of his approach became clear. Comparing and contrasting sections of code was much more difficult with the files renamed. Reviewing PRs for “improved” code required more involved thinking and unnecessary nuance, whereas unchanged code could have been compared against a known entity. Later changes to make the code backwards compatible had to be built on changes and assumptions I had made. While taking ownership of the code might have improved readability, it didn’t help with velocity or team ownership.

Generics are marginally useful for versioning

The API we moved had a couple of generic functions and types to handle the Version One and Version Two typings. While moving Version Two, the generics seemed an unnecessary part of the migration. Before we could finish, the requirements changed and we needed to migrate Version One as well. As a result, we inherited more complexity than we had anticipated. We had to separate the schema object from the hydratable object, allowing us to work with a more flexible model, and we had to add paths to hydrate different properties based on which endpoint version was called. But we didn’t miss the generics or think about adding them back, and in the process of understanding the original code base, the generics didn’t help us understand flow or behavior. While generics do reduce the amount of repeated code, they increase complexity and can make it harder to remove deprecated code.
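As a rough illustration of the trade-off, here is a sketch of explicit per-version types built on a shared schema type, with hydration happening per version, rather than one generic wrapper spanning both. All type and function names here are hypothetical, assumed for the example.

```typescript
// A generic wrapper ties both versions to one abstraction, e.g.:
//   interface Envelope<T extends OrderV1 | OrderV2> { items: T[]; cursor?: string }
// Explicit types per version repeat a little more code, but are easier to delete later.

interface OrderSchema {          // shape returned by the data store
  id: string;
  customerId: string;
}

interface OrderV1 extends OrderSchema {
  customerName: string;          // V1 callers expect this hydrated inline
}

interface OrderV2 extends OrderSchema {
  customerRef: { id: string };   // V2 callers resolve the customer themselves
}

// Hydration paths differ by version, chosen at the endpoint that was called.
async function hydrateV1(
  order: OrderSchema,
  lookupName: (id: string) => Promise<string>
): Promise<OrderV1> {
  return { ...order, customerName: await lookupName(order.customerId) };
}

function hydrateV2(order: OrderSchema): OrderV2 {
  return { ...order, customerRef: { id: order.customerId } };
}
```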

Code never dies, especially if its death is marked on a calendar

At the start of the project, we were optimistic that we could not only remove Version One but also write an updated Version Three of the API. The API client was on board with simplifying both the input and output of the requests. The first reality check came with the realization that coordinating version changes with the client would compromise our delivery speed, so we decided to lift and drop “as-is.” The second reality check came when another of the API’s clients wasn’t going to make the cutoff date for our migration to Version Two, which meant we also needed to migrate the Version One endpoints. Adding Version One after Version Two was relatively straightforward: we dropped some unrequired properties and made an extra call or two to hydrate an additional property at the query level. Our GraphQL microservice pattern let core functionality be shared efficiently across all the endpoints. The two versions shared the service layer and certain mapping functions, while requirements unique to an endpoint were clearly marked as belonging to a specific version. Ultimately, focusing on iterative delivery kept us on schedule and better prepared for changes in requirements.
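A minimal sketch of that shared-service-layer shape might look like the following: two versioned resolvers call the same service function, and the extra hydration Version One needs happens at the query level. The resolver and service names are assumptions for the example, not the real schema.

```typescript
interface Order { id: string; customerId: string; total: number }

// Shared service layer: one implementation of the core lookup, used by both versions.
const orderService = {
  async byId(id: string): Promise<Order> {
    return { id, customerId: "c-1", total: 4200 }; // stand-in for the real data source
  },
};

// Stand-in for a call to another service that only V1 clients still need.
async function lookupCustomerName(customerId: string): Promise<string> {
  return `customer-${customerId}`;
}

const resolvers = {
  Query: {
    // Version Two: returns the order as the service provides it.
    orderV2: (_: unknown, { id }: { id: string }) => orderService.byId(id),

    // Version One: same service call, plus one extra hydration step,
    // clearly marked as belonging to this version.
    orderV1: async (_: unknown, { id }: { id: string }) => {
      const order = await orderService.byId(id);
      const customerName = await lookupCustomerName(order.customerId);
      return { ...order, customerName };
    },
  },
};
```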

Set a high standard for unit tests

The project we migrated contained large test fixtures applied to integration-level tests, asserting the content of requests to external clients and expecting properly hydrated results for a range of inputs (options, filters, etc.) and edge cases. Migrating the tests gave us a high degree of confidence in the code’s behavior. We added several more files of unit tests to ensure that stand-alone functions would continue to meet expectations as the project evolved. These unit tests also helped our team take ownership of the code and understand the design and functionality in fine detail. We’ve since successfully updated the functionality to include more dynamic client requests and will soon be building out another version of two of the endpoints. Having robust unit tests at every layer facilitates incremental changes as we build up to meet the new requirements.
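In the spirit of those layer-level tests, here is a small sketch of what a stand-alone unit test might look like. The function under test and the fixture values are illustrative; the real project used much larger shared fixtures.

```typescript
import { describe, expect, it } from "@jest/globals";

// Hypothetical mapping function under test: converts a stored price in cents
// to the decimal price that Version One clients expect.
function toV1Response(order: { id: string; priceCents: number }) {
  return { id: order.id, price: order.priceCents / 100 };
}

describe("toV1Response", () => {
  it("converts cents to a decimal price for V1 clients", () => {
    expect(toV1Response({ id: "o-1", priceCents: 1999 })).toEqual({
      id: "o-1",
      price: 19.99,
    });
  });

  it("handles a zero-cost order", () => {
    expect(toV1Response({ id: "o-2", priceCents: 0 })).toEqual({
      id: "o-2",
      price: 0,
    });
  });
});
```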

Conclusion

They say, “Code never dies.” But it does evolve and adapt to requirements and constraints. It’s been a month since the project was migrated, and the transition was accomplished with zero service outages and no behavior or performance issues. We learned that backwards compatibility in a code base benefits from strong project design: clearly delineated layers and minimal coupling. In our experience, a tolerance for some repeated code improved readability and maintenance across versions. As a final thought, open communication within the team and with the API’s clients was important for planning, scheduling, and coordinating milestones. Consistent and reliable contacts ensured that the lessons we learned were part of a productive and efficient process that stayed on schedule and delivered satisfying results.