Lately, I have been optimizing code to improve execution time. I am still becoming familiar with a large, complex software system that serves many end users. Because my knowledge of the system is limited and people depend on it, I have focused my modifications on individual functions rather than components, modules, or subsystems. Profiler results identify candidate functions that may benefit from optimization, and the results I have encountered are consistent with the 80-20 rule: roughly 20 percent of the code is responsible for 80 percent of the execution time. I focused on those functions, since they offered the greatest opportunity for improvement.
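Once the profiler has flagged a hot function, I like to corroborate the report with a targeted micro-benchmark before and after changing it. The sketch below is illustrative only: checksum() is a hypothetical hot function standing in for the real candidates, and the standard clock() timer is used for portability.

```c
#include <stddef.h>
#include <time.h>

/* Hypothetical hot function flagged by the profiler (an assumption
 * for illustration; the real candidates came from profiler output). */
long checksum(const unsigned char *buf, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}

/* Micro-benchmark the candidate in isolation to confirm it is worth
 * optimizing and to establish a baseline before modifying it. */
double time_checksum(size_t iterations) {
    unsigned char data[4096];
    for (size_t i = 0; i < sizeof data; i++)
        data[i] = (unsigned char)i;

    volatile long sink = 0;   /* keep the loop from being optimized away */
    clock_t start = clock();
    for (size_t it = 0; it < iterations; it++)
        sink += checksum(data, sizeof data);
    (void)sink;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```

A baseline measurement like this makes later comparisons honest: the same harness is rerun after each optimization attempt, so improvements are measured rather than assumed.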
I have had to optimize code toward other desired characteristics in other projects. For example, I once optimized for space when implementing memory built-in self-tests for custom peripherals on an Infineon chip. Space was scarce in that situation, and fortunately the control words that manipulate the custom peripherals lent themselves to simple compression methods. Although decompressing the custom peripheral instructions added to the execution time, the added time was acceptable because it allowed the built-in self-test code and data to fit in the limited memory space.
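The original control-word encoding is not something I can reproduce here, but a run-length encoding over a hypothetical 8-bit control-word stream illustrates the space-for-time exchange: the encoded table shrinks when words repeat, at the cost of a small decode loop at run time.

```c
#include <stddef.h>
#include <stdint.h>

/* Run-length encode a stream of 8-bit control words as (count, value)
 * pairs.  Returns the number of bytes written to out.  When words
 * repeat, the encoded form is smaller than the raw stream. */
size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        uint8_t value = in[i];
        uint8_t count = 1;
        while (i + count < n && in[i + count] == value && count < 255)
            count++;
        out[o++] = count;
        out[o++] = value;
        i += count;
    }
    return o;
}

/* Expand the (count, value) pairs back into control words.  This loop
 * is the execution-time cost paid in exchange for the space savings. */
size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out) {
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (uint8_t c = 0; c < in[i]; c++)
            out[o++] = in[i + 1];
    return o;
}
```

This is a sketch of the general technique, not the scheme actually used on the chip; any real encoding would be shaped by the structure of the control words themselves.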
Another situation that called for optimization involved a tradeoff between data NVM and instruction NVM. In a Harvard architecture, data and instruction memory are separate, unlike architectures that allow code to execute from memory that is also used for data. When the data NVM neared exhaustion, minimizing its use became a requirement. In this situation, data NVM usage was reduced by initializing variables with explicit assignments. The processor implemented these assignments with “load immediate value” operations that placed the desired values into the variables, so the initialization resided in instruction NVM rather than in copy sections in data NVM.
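The two initialization styles can be sketched in C. The values and names below are hypothetical, and the actual memory placement depends on the toolchain and linker script, but the general pattern holds: an initialized global normally gets its initial values from a copy section, while run-time assignments move that cost into code.

```c
#include <stdint.h>

/* Initialized at load time: the initial values typically live in a
 * copy section that startup code copies from data NVM into RAM. */
uint16_t table_from_copy_section[4] = {10, 20, 30, 40};

/* Initialized at run time: the compiler can emit "load immediate"
 * style instructions in instruction NVM instead, so no data-NVM
 * copy section is needed for these values. */
uint16_t table_from_code[4];

void init_table(void) {
    table_from_code[0] = 10;
    table_from_code[1] = 20;
    table_from_code[2] = 30;
    table_from_code[3] = 40;
}
```

The assignments must run before the table is used, so this approach shifts responsibility from the startup code to the application; that is part of the tradeoff.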
At times, there is a need to sacrifice space for reduced execution time; at other times, space is scarce and an increase in execution time is acceptable. Sometimes there is an exchange between memory types, and sometimes an optimization reduces both space and execution time. In my view, optimization can be seen as the management of tradeoffs.
With the gains achieved in my current optimization efforts, I am wary that premature optimization may seem implicitly encouraged. As Knuth is credited with saying, “premature optimization is the root of all evil (or at least most of it) in programming.” A great deal of time can be wasted optimizing a solution that is later replaced, or optimizing code that consumes a very small share of the system’s resources. Premature optimization may also add complexity to the implementation, making the system harder to complete and schedule constraints harder to meet.
Although I disfavor premature optimization, adopting RandomSort() or approaches of similarly poor quality for their situations should also be avoided; optimizing an inefficient approach to a problem is a poor allocation of valuable resources. A balance between avoiding premature optimization and selecting efficient algorithms must be maintained. When the decision is between a prematurely optimized solution and one that is good enough, I encourage adopting the good-enough solution, implemented in such a way that an optimized solution can easily be substituted if the need arises. This allows timely implementation of the system and provides a starting point for profiling and incremental refinement.
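One way to keep a good-enough solution easily replaceable is to route callers through a single indirection point. The sketch below uses a function pointer and an insertion sort as the hypothetical good-enough baseline; if profiling later shows sorting dominates, only the pointer assignment changes, not the callers.

```c
#include <stddef.h>

typedef void (*sort_fn)(int *a, size_t n);

/* Good-enough baseline: insertion sort is simple and correct, and it
 * performs adequately on the small inputs assumed here. */
static void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

/* Single substitution point: swap in an optimized routine later
 * without touching any caller. */
sort_fn active_sort = insertion_sort;

void sort_records(int *a, size_t n) {
    active_sort(a, n);
}
```

The same effect can be achieved with a linker-level substitution or a thin wrapper function; the essential point is that the choice of algorithm is confined to one place.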