Linux Kernel Development

July 16th, 2012

For nearly a decade, I have had a latent interest in Linux kernel development. I administered Linux servers for years, but I never had as much motivation as I have now to explore the kernel’s mechanisms. I remember leafing through books at Borders that made kernel development seem inaccessible. Robert Love’s book, Linux Kernel Development (third edition), stands out as an invitation to explore and improve the open source operating system. Love’s book is a great introduction to Linux and its subsystems, and it has encouraged me to study the operating system’s implementation.

Love’s writing style makes the topic of Linux kernel development accessible to intermediate software developers. His style is clear, concise, and effective. It differs from the style of voluminous books, which seem to reflect publishers’ hope of attracting shoppers by taking up more shelf space at bookstores. His style allows Love to convey information in fewer words, saving the reader’s time, keeping attention focused on the Linux operating system, and letting the reader pick up knowledge quickly.

Linux Kernel Development introduces process management, interrupt handling, memory management, and I/O handling as implemented in recent versions of Linux. To support examination of these primary operating system functions, the book also reviews the data structures and thread synchronization mechanisms used by the kernel. The book leaves deep examination of data structures to texts focused on algorithms, but it provides thorough coverage of the synchronization mechanisms used by Linux and highlights the strengths and limitations of each. This keeps the book focused on Linux, expanding on topics specific to the kernel while avoiding the distraction of general topics such as data structure implementation.
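
For a taste of one such synchronization mechanism, here is a minimal kernel-style sketch of a spinlock. This is my own illustration rather than an example from the book, and the counter it protects is hypothetical.

    /* Minimal spinlock sketch; the counter and its users are hypothetical. */
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(counter_lock);
    static unsigned long counter;

    void counter_increment(void)
    {
        unsigned long flags;

        /* Disable local interrupts and busy-wait for the lock: cheap for
         * short critical sections that may race with interrupt handlers,
         * but wasteful if the lock is held for long. */
        spin_lock_irqsave(&counter_lock, flags);
        counter++;
        spin_unlock_irqrestore(&counter_lock, flags);
    }

The busy-waiting behavior is exactly the kind of limitation the book weighs against alternatives such as semaphores.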

Its organization, structure, and style make Robert Love’s book a potent introduction to Linux kernel development.

New eReader Acquired!

May 15th, 2012

Whenever I run errands, such as having my car serviced or getting a haircut, I usually bring a book with me for the wait. I already had a desktop, laptop, netbook, and smartphone, all used daily, and I figured that I could use my netbook or smartphone to read ebooks. The need for an ereader was not apparent.

A presenter at my company’s education series introduced me to observed computing trends. In a world of smartphones and tablets, ereaders can fill a need between those device classes or be considered complementary to tablets. Tablets and ereaders have similar form factors, and using one as the other is tempting, but each is better suited to its own function. Tablets are good for light computing, such as browsing the Internet or checking email. Ereaders are excellent for lengthy reading sessions.

I was presented with an opportunity to purchase an ereader on my most recent of countless trips to Barnes & Noble. The NOOK Simple Touch and NOOK Color were on sale for Mother’s Day, and the NOOK Simple Touch, at 20% off, was the only device that I considered. With the store closing and the sale ending, and knowing that I could return the device within 14 days, I made the purchase and gave myself time to decide whether to keep it.

Compatibility with my Dell Mini9, running Ubuntu 12.04, was the first thing I checked as I charged the NOOK. After I registered my NOOK, I was able to load The PostScript® Language Tutorial and Cookbook in PDF from Adobe.com. The NOOK behaved like a USB drive when connected to my netbook, and I was able to copy the PDF onto the ereader with ease.

The NOOK is rated to last two months between charges with 30 minutes of daily reading. This is certainly better than charging a netbook or tablet daily when using it as an ereader. The NOOK is light. It is compact. It feels sturdy. The NOOK is a keeper.

Optimization

December 16th, 2011

Lately, I have been involved with optimizing code to improve execution time. I am still becoming familiar with a large and complex software system used by a multitude of end-users. Because my knowledge of the system is limited and people depend on it, the scope of my modifications is focused on functions rather than components, modules, or subsystems. Profiler results present candidate functions that may benefit from optimization. The results I have encountered in my optimization efforts are consistent with the 80-20 rule, where 20 percent of the code is responsible for 80 percent of the execution time. I focused on those functions, since they offered the greatest opportunity for improvement.
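
One way to confirm a profiler’s candidates is a direct before-and-after measurement. The sketch below shows a minimal timing harness; candidate_function and the iteration count are hypothetical stand-ins, not part of the system described above.

    /* Time repeated calls to a hypothetical profiler candidate. */
    #include <stdio.h>
    #include <time.h>

    static void candidate_function(void)
    {
        static volatile unsigned long sink;   /* volatile defeats optimization */
        for (int i = 0; i < 1000; i++)
            sink += i;
    }

    int main(void)
    {
        enum { ITERATIONS = 100000 };
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < ITERATIONS; i++)
            candidate_function();
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("%d calls in %.3f s\n", ITERATIONS, elapsed);
        return 0;
    }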

In other projects, I have had to optimize code toward different desired characteristics. One project required optimizing for space, as in the implementation of memory built-in self tests for custom peripherals on an Infineon chip. Space was scarce in that situation, and it was fortunate that the control words, which manipulate the custom peripherals, allowed the use of simple compression methods. Though decompressing the custom peripheral instructions added to the execution time, the added time was acceptable because it allowed the built-in self test code and data to fit in the limited memory space.
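
As a sketch of the idea, the following shows run-length decompression. Run-length encoding is only an assumption here; the actual scheme and control-word format were project specific.

    /* Expand (count, control word) pairs into a buffer of control words.
     * Run-length encoding is assumed for illustration; the real format
     * differed. Returns the number of expanded words. */
    #include <stddef.h>
    #include <stdint.h>

    size_t rle_expand(const uint8_t *in, size_t in_len,
                      uint8_t *out, size_t out_cap)
    {
        size_t n = 0;

        for (size_t i = 0; i + 1 < in_len; i += 2) {
            uint8_t count = in[i];
            uint8_t word  = in[i + 1];

            for (uint8_t r = 0; r < count && n < out_cap; r++)
                out[n++] = word;   /* replay the repeated control word */
        }
        return n;
    }

The decoder costs cycles at run time, which matches the tradeoff described above: extra execution time in exchange for fitting the code and data into memory.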

Another situation called for a tradeoff between data NVM and instruction NVM. In a Harvard architecture, data and instruction memory are separate, unlike architectures that execute code from the same memory used for data. When the data NVM neared exhaustion, minimizing its use became a requirement. Data NVM usage was reduced by initializing variables with explicit assignments, which the processor carries out as “load immediate value” operations that place the desired values into variables held in registers. The assignment code resides in instruction NVM, as opposed to the copy sections that would otherwise reside in data NVM.
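
A minimal sketch of the contrast, with hypothetical names and values:

    /* Approach 1: initialized data. The initial values live in a copy
     * section in data NVM, which startup code copies into RAM. */
    static int threshold   = 42;
    static int retry_limit = 3;

    /* Approach 2: explicit assignment. The values are encoded as
     * load-immediate operations in instruction NVM, so no copy section
     * is needed in data NVM. */
    static int threshold2;
    static int retry_limit2;

    void init_config(void)
    {
        threshold2   = 42;   /* compiles to a load-immediate instruction */
        retry_limit2 = 3;
    }

The cost of the second approach is that init_config() must run before the variables are used, and the assignments consume instruction NVM instead.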

At times, there is a need to sacrifice space for reduced execution time. Sometimes, space is scarce and an increase in execution time is acceptable. Other times, one type of memory is exchanged for another. Occasionally, an optimization reduces both space and execution time. In my view, optimization is the management of tradeoffs.

With the gains achieved in my current optimization efforts, I am wary of appearing to implicitly encourage premature optimization. As Knuth is credited with saying, “premature optimization is the root of all evil (or at least most of it) in programming.” A lot of time may be wasted optimizing a solution that is later replaced. A great amount of time may also be wasted optimizing code that consumes a very small percentage of the system’s resources. Premature optimization may also introduce complexity that makes the system harder to complete and schedule constraints harder to meet.

Although I disfavor premature optimization, one should also avoid adopting RandomSort() or approaches of similarly poor quality for their situations. Optimizing an inefficient approach to a problem is a poor allocation of valuable resources. A balance between avoiding premature optimization and selecting efficient algorithms must be maintained. When it comes to a decision between a prematurely optimized solution and one that is good enough, I encourage adopting the good enough solution, implemented in such a way that an optimized solution can easily be substituted if the need arises. This allows timely implementation of the system as well as a starting point for profiling and incremental refinement.

RandomSort() is a function that randomly shuffles the elements of a collection and then checks whether the elements are ordered, repeating the process until they are.
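
For concreteness, here is a sketch of RandomSort in C. The implementation and interface are my own illustration, operating on an int array so that a sensible sort with the same signature could be substituted later without touching call sites.

    /* RandomSort (often called bogosort): shuffle until sorted.
     * The expected number of shuffles grows factorially with n. */
    #include <stdbool.h>
    #include <stdlib.h>

    static bool is_sorted(const int *a, size_t n)
    {
        for (size_t i = 1; i < n; i++)
            if (a[i - 1] > a[i])
                return false;
        return true;
    }

    static void shuffle(int *a, size_t n)   /* Fisher-Yates shuffle */
    {
        for (size_t i = n; i > 1; i--) {
            size_t j = (size_t)rand() % i;
            int tmp  = a[i - 1];
            a[i - 1] = a[j];
            a[j]     = tmp;
        }
    }

    void random_sort(int *a, size_t n)
    {
        while (!is_sorted(a, n))
            shuffle(a, n);
    }

Keeping the same signature as a reasonable sort means replacing random_sort with an efficient implementation is a one-line change, which is exactly the substitutability argued for above.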

Potential HP Remote Exploit

December 8th, 2011

I remember my first computer. It was a standalone system that booted directly to the Microsoft DOS prompt. There were no logins or passwords, and no real security measures to protect files on disk or processes in memory. Floppy disks were the primary means of transferring data between computers. Bulletin board systems were popular at the time, but without a modem, my computer could not connect to them. Aside from a burglar taking my computer, there was no concern about data theft and security.

As more and more devices are added to the Internet, security becomes increasingly important. There has been news lately of potential remote exploits in HP printers. Adding printers, phones, coffee makers, security systems, and other devices to a network enlarges that network’s attack surface, and a larger attack surface gives an adversary more potential vulnerabilities to exploit. An HP representative can downplay the possibility of using malicious code to remotely start a fire, but this should not distract people from the fact that malicious code can do destructive things with less fanfare, such as silently forwarding copies of confidential information to adversaries or identity thieves. It is important that network devices, printers included, keep up with current security practices, and it is important that we continue to build systems that are secure by default.

Wherever the Wind Blows

August 24th, 2011

The winds of change have scattered my team among several companies. The cohesion among team members at my previous company has led groups of us to find ourselves working in new environments alongside former colleagues. Two former teammates have relocated to a semiconductor company, and two others took up positions at an aerospace company. I joined two colleagues from my previous company at another company. Other groups have formed similarly and work together at other companies.

Being at a small technology company allowed each team member to develop highly marketable skills. With each team member interested in developing their skills, an arrangement evolved in which an engineer gained experience in all phases of the development life cycle. The software development team, for example, rotated the responsibilities of designing, implementing, and testing software components. At some point during the projects, even the latest additions to our team would be responsible for critical and complex parts of the systems. Each member gained experience delegating work and tracking the progress of their modules. The team as a whole was also responsible for integrating these software components into the final deliverable products. This environment, with its opportunities for self-development and growth in professional maturity, was fostered by an effective managerial style practiced by experienced management.

I have interviewed at numerous companies and received several offers, an experience shared with many of my former teammates. Our former company left us in a strong position to select our next company from among those offering us opportunities. While interviewing, I was interested in the team I would be joining, the type of projects, and the overall environment.

I am thankful that many of the people with whom I interviewed were very candid. One company explained that being at the office for at least 60 hours per week, as well as being reachable while away from the office, was expected and typical. Another company informed me of its lack of process and documentation, and warned me about a challenging transfer of knowledge and a steep learning curve. That company’s management expressed interest in increasing formalism, and I was willing to be accountable for effecting positive change, but the environment seemed too challenging to change without real authority. Ultimately, my selection of an opportunity depended on the team, the company, and my interests. My former teammates have likewise made their selections based on their preferences.

It is here, then, that I wish my former team members the best of luck in their endeavors. I hope that their careers lead to achievement, personal fulfillment, and betterment of society as a whole. And, I would strongly consider taking the opportunity to work with any of my former team members in the future.