What has kept me up at night for some time now is deciding when the right time is to optimize software performance. If there is no immediate value for the client, how do I justify addressing the performance problems of a system before it’s needed?
Too often, the path developers take is to wait it out. Many project teams take a “good enough” approach, citing a lack of immediate client value, or other more pressing features to develop, as reasons to push off these enhancements. Plenty of libraries and standard conventions exist to address performance, but the original setup used to manage performance may no longer be sufficient as requirements change, features are added, and datasets grow. Only once performance hinders the client does working on response batching, view model trimming, or query optimization become the priority again.
Benefits of Proactive Code Optimization
By taking a proactive approach to code optimization, you may be able to avoid this altogether. Being proactive helps:
- Scalability – Optimizing system performance readies the client/project for success. Because you understand where they are and where they’re going, you can prepare the system to grow with them rather than become the reason they have to slow down.
- User Experience – Code optimization can help provide a seamless experience from beginning to end, not held up by loading screens. Waiting until performance becomes a problem means the user experience suffers while optimization takes place.
- Building Trust – Anticipating the future needs of a system will prevent an egg-on-your-face experience with the client and push you toward a consultancy mindset. Don’t be the one who could have prevented an issue. Do the due diligence now so you can act as a technical advisor in the future.
When To Be Proactive
But how do you know when you should be proactive in code optimization or if it’s premature? Ask yourself these questions:
1. Do I understand the code?
When working with any data, developers should have performance in the back of their minds. Standard conventions could be incorporated to make performance problems something that may never need to be addressed. Are you needlessly loading extra data in your view model? Is your query optimized to only load the data that is being requested? If you’re paginating, are you paginating on the backend? Can you batch the request if it is a large dataset? These options, and many others, can be addressed with minimal effort before ever delivering a feature to the client and can postpone needing to address performance (maybe indefinitely).
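The pagination and trimming questions above can be sketched concretely. This is a minimal, hypothetical example using an in-memory SQLite table standing in for a real datastore: the database returns only the requested page, and the query selects only the columns the view needs rather than dragging a bulky text field across the wire.

```python
import sqlite3

# Hypothetical setup: an in-memory table standing in for a real datastore.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, notes TEXT)")
conn.executemany(
    "INSERT INTO orders (customer, notes) VALUES (?, ?)",
    [(f"customer-{i}", "long free-text blob " * 50) for i in range(1_000)],
)

def get_orders_page(page: int, page_size: int = 25) -> list:
    """Paginate in the database, not in application code, and select
    only the columns the view actually needs (no bulky notes blob)."""
    return conn.execute(
        "SELECT id, customer FROM orders ORDER BY id LIMIT ? OFFSET ?",
        (page_size, page * page_size),
    ).fetchall()

first_page = get_orders_page(0)
print(len(first_page))  # 25 rows cross the wire, not 1,000
```

The same shape applies whether the store is SQL, an ORM, or a paginated API: the key is that the limit is applied where the data lives, so the cost stays flat as the table grows.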
Note: Even the most minor changes, such as a simple table join, can cause drastic performance issues within an application. Changes should be backed with testing in a deployed environment to verify performance did not fall outside the acceptable threshold, whether or not the change was intended as an optimization. Never assume that following a convention or using a well-known library will produce the expected result when combined with your code.
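To see how an innocuous-looking join can surprise you, here is a toy, hypothetical illustration: one logical record fans out into hundreds of result rows when the join key is not one-to-one, which is exactly the kind of multiplication that quietly wrecks response times at production scale.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE logins (user_id INTEGER, ts TEXT);
""")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
# One user, but many rows on the other side of the join.
conn.executemany("INSERT INTO logins VALUES (1, ?)", [(f"t{i}",) for i in range(500)])

# Looks harmless: "just add a join to pull login data".
rows = conn.execute(
    "SELECT u.id, u.name, l.ts FROM users u JOIN logins l ON l.user_id = u.id"
).fetchall()

# One logical user becomes 500 result rows.
print(len(rows))  # 500
```

The test dataset above makes the blowup obvious; in a realistic environment with production-sized data, the same mistake shows up only as a mysteriously slow endpoint, which is why the note insists on measuring in a deployed environment.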
2. Do I understand the scaling requirements of the business?
If you’re working on a feature that needs to scale, you need to understand what will happen to the dataset before making improvements. It may work great in testing, but will the dataset grow to 10x the test set? 100x? 1,000x? At what point will performance no longer be acceptable? If the client’s needs will outgrow the feature in the immediate future, the work needs to be done now.
On the other hand, the client’s needs may not scale until the distant future (say they are merging with another company, so their dataset will double in a year, or they plan to expand into 4x the cities they currently serve over the next five years). In that case, the feature could meet acceptance criteria now, but at the very least, a backlog item should be created to address the client’s needs before they become a hindrance. Consider working with the client to determine what the acceptance criteria are around performance for the more at-risk endpoints, then set up a test case that flags when current application response times get within a certain threshold of what is acceptable to the client and require more immediate attention.
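Such a threshold test can be quite small. The sketch below assumes hypothetical numbers (a 500 ms budget agreed with the client, an alarm at 80% of it) and a stand-in `fetch_report` function in place of a real call to the at-risk endpoint; it fails loudly once the measured 95th-percentile latency drifts too close to the agreed limit.

```python
import statistics
import time

# Hypothetical numbers: assume the client signed off on a 500 ms budget
# for this endpoint, and we want an alarm at 80% of that budget.
ACCEPTABLE_MS = 500
ALERT_RATIO = 0.8

def fetch_report() -> None:
    """Stand-in for calling the at-risk endpoint in a deployed environment."""
    time.sleep(0.005)  # simulate a 5 ms response

def test_report_latency_within_budget(samples: int = 20) -> float:
    timings_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        fetch_report()
        timings_ms.append((time.perf_counter() - start) * 1000)
    p95 = statistics.quantiles(timings_ms, n=20)[-1]  # rough 95th percentile
    assert p95 < ACCEPTABLE_MS * ALERT_RATIO, (
        f"p95 {p95:.0f} ms is within {int(ALERT_RATIO * 100)}% of the "
        f"{ACCEPTABLE_MS} ms budget -- schedule optimization work now"
    )
    return p95

p95 = test_report_latency_within_budget()
```

Run against the deployed environment on a schedule, a failure here becomes the trigger for pulling that backlog item forward, rather than waiting for the client to report the slowdown.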
3. Do I understand the plan of attack? Have I communicated the plan to the team and client?
Let’s say that the dataset is highly complicated, reasonable attention has been paid to optimizing the code, and the business needs won’t change for some time. You understand the needs of the business, and you understand what approach is being taken. In that case, you should be able to communicate to the client if the system’s performance will meet their needs now, and how much work, if any, will need to be done to meet their needs in the future.
Considering all of this, there is a time and a place for optimizing code. Even if it feels premature or the time to do it is not right now, code optimization should be a relevant concern and top of mind before the performance of the system becomes business-critical.