Pray for Better, Prepare for Worse

October 21st, 2006

All things mechanical will fail. A lack of sound disaster recovery procedures should keep a knowledgeable IT administrator awake at night. Measures to prevent data loss are required in many recovery scenarios, and they are a worthwhile vehicle for discussing the broader need to practice disaster recovery procedures.

Data backups are a key component of disaster recovery. Recovering from the failure of a complex system requires planning and training. IT administrators and operators should not be satisfied with simply deploying backup software. Operators become comfortable only through continual practice of recovery procedures, and that practice leaves them better prepared to carry the procedures out under the pressure of a real system failure. Recovering data becomes less of an exceptional task and more of a routine one.

Having IT operators routinely perform backups and recoveries also lets them exercise the backup hardware. Checking the recorded data and verifying its recovery are necessary steps toward preparedness. Checking that data from older systems can be recovered onto newer systems may be important as well. Operators should practice different types of recoveries to verify that data can be recovered under different scenarios.
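Verifying recovered data can be as simple as comparing checksums of the recovered files against the originals. The sketch below simulates such a drill in Python; the directory names and files are hypothetical, and the `shutil.copytree` call merely stands in for whatever restore mechanism a real backup system would use.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_recovery(original_dir: Path, recovered_dir: Path) -> list:
    """Compare every file under original_dir against its recovered copy.

    Returns a list of relative paths that are missing or differ.
    """
    problems = []
    for src in original_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(original_dir)
        dst = recovered_dir / rel
        if not dst.is_file() or checksum(src) != checksum(dst):
            problems.append(str(rel))
    return problems

# Simulated drill: create some "live" data, "restore" it, then verify.
work = Path(tempfile.mkdtemp())
live = work / "live"
live.mkdir()
(live / "orders.db").write_text("order data")
(live / "config.ini").write_text("settings")

restored = work / "restored"
shutil.copytree(live, restored)          # stand-in for an actual restore
print(verify_recovery(live, restored))   # an empty list means all files match
```

A drill script like this makes the "verify" step concrete: an empty problem list is a pass, and anything else names exactly which files failed to come back intact.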

The routine practice of backup and recovery procedures also verifies the completeness and correctness of the recovery plans, and it provides the opportunity to measure their effectiveness and performance. Key measurements include the time needed for recovery and the amount of data lost between the most recent backup and the point of failure. Repeated validation of the procedures provides opportunities for refinement, and special cases in the recovery process should be minimized for each system. Minimizing recovery time and data loss is work that can continue indefinitely.
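The two key measurements mentioned above reduce to simple timestamp arithmetic: recovery time is the gap between the failure and the restored service, and the data loss window is the gap between the last successful backup and the failure. A minimal worked example, with entirely hypothetical drill timestamps:

```python
from datetime import datetime

# Hypothetical timestamps recorded during a recovery drill.
last_backup      = datetime(2006, 10, 20, 2, 0)    # nightly backup completed
failure          = datetime(2006, 10, 20, 14, 30)  # disk failure detected
service_restored = datetime(2006, 10, 20, 16, 45)  # system back online

recovery_time    = service_restored - failure  # how long users waited
data_loss_window = failure - last_backup       # work at risk since last backup

print(f"Recovery took {recovery_time}; up to {data_loss_window} of data at risk")
# Recovery took 2:15:00; up to 12:30:00 of data at risk
```

Tracking these two numbers across drills shows whether refinements to the procedures are actually paying off.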

Plan. Deploy. Practice. Repeat.

Technology that enhances disaster recovery preparedness continues to evolve. For example, file systems that provide consistent snapshots for backups are being deployed. Adopting such systems may also complicate recovery procedures: trouble with a Logical Volume Manager during a bare metal (“nuke”) restoration from a rescue CD is the kind of problem that may be discovered only through practice.

But You’re Management!

October 20th, 2006

Managers’ desire to be involved with technology is natural. They want to be seen as quick to learn and adapt. After all, failing to use new technologies puts a company at a disadvantage against its competitors, while effective use of technology can develop a small upstart into a best-of-breed market dominator. Management sees the potential effect of technology on the future of a business, and managers are driven to master it before they are ousted through company politics by someone more able.

People who are good at management tend to ask questions rather than make suppositions when technical issues are not well understood. They are also able to provide complete, consistent, and detailed descriptions of what is technically desired. Although some entrepreneurs may have relied on gut instinct and gotten lucky, good managers develop instruments of analysis and employ measurements from those tools to make consistently good business decisions.

There are others in management who make assumptions. They raise vague technical issues, and they create an illusion of importance for these issues by keeping them vague and aligning them with general technical concerns. They worry about technology’s accessibility purely out of intuition and arrogance. “We are like our target audience, and we cannot wield this technology; therefore this technology is inaccessible to our target audience” is a deduction commonly made by weak managers.

Rather than making changes without performance measurements, it is advisable to persuade people with weak management skills to gather information that supports their proposals. This helps weak managers become more data-driven. Analyzing performance data, or deploying tools to gather such measurements, is better than investing effort in changes based on personal intuition. Without proper tools in place, evaluating the outcome of any change is difficult.

Lead through suggestion, especially when not in a position to lead.

So, Where Do You Work?

October 16th, 2006

A post at /. points out a USA Today article noting a growing trend of people performing their jobs at places other than home and the office. I realized that I started using Starbucks as a makeshift office months ago. In general, being able to work at alternative places during atypical hours increases productivity. Sometimes, people just need a change of work environment to boost production.

The ability to let workers work remotely demonstrates a company’s agility. Monolithic companies tend to be very traditional and seem to resist novel work practices. Workers who perform their jobs remotely are highly autonomous. Because people tend to depend on the work of others to carry out their own jobs, the delivery of results from a remote worker’s efforts serves as an inherent mechanism for team synchronization. This synchronization helps maintain productivity without a traditional level of supervision.

Supervising remote workers may be challenging for armchair managers, who depend on visual cues to deduce productivity. Managers who believe that the appearance of being busy is indicative of production have trouble assessing the benefit of a worker’s effort when the worker is not visible. Judging the productivity of workers who do not work at the office may prove difficult for such managers.

As suggested earlier, the key measurement of performance is production. ‘Busyness,’ the state of being busy, neither builds nor supports companies; companies require results. After seeing consistently better results from workers who work remotely, more companies will come to accept the practice of working at a third place.

Celebrating Software Modularity

September 7th, 2006

I have recently been involved in modifying several software systems. In one system, service provider preferences changed, which required implementing code that interfaced with the new provider. Well-engineered software architecture simplifies such change. A desirable architecture for a system with uncontrollable external dependencies confines changes to specific parts of the system.

One company wanted to use a hosted Microsoft Exchange Server solution, but their proprietary system depended on an in-house mail server. The system depended on a single mail component, so implementing a component that worked with the hosted provider and exposed the same interface to the proprietary system was all that was needed to make the transition. Code with a direct dependency on the mail component required only minor modification of references to the old component. All other code in the system was unaffected.
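The swap described above can be sketched with a shared interface and two interchangeable implementations. This is an illustrative Python sketch, not the actual system: the names `MailService`, `InHouseMailService`, and `HostedExchangeMailService` are hypothetical, and the provider-specific logic is stubbed out.

```python
from abc import ABC, abstractmethod

class MailService(ABC):
    """The single interface the rest of the system depends on."""

    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> bool:
        """Deliver a message; return True on success."""

class InHouseMailService(MailService):
    """Original component: would talk to the in-house mail server."""

    def send(self, to, subject, body):
        # The SMTP conversation with the local server would go here.
        return True

class HostedExchangeMailService(MailService):
    """New component: same interface, hosted provider behind it."""

    def send(self, to, subject, body):
        # Calls to the hosted Exchange provider would go here.
        return True

def notify_customer(mail: MailService, address: str) -> bool:
    # Application code depends only on the interface, not on a provider.
    return mail.send(address, "Order shipped", "Your order is on its way.")

# Switching providers means changing one constructor call at the wiring point:
print(notify_customer(HostedExchangeMailService(), "customer@example.com"))
```

Because `notify_customer` is written against `MailService` rather than a concrete class, only the code that constructs the component needs to change when the provider does.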

There are situations in which an elegantly engineered software architecture cannot be devised and deployed. A client demanding that a system be delivered earlier than the engineers’ completion estimates is one such situation. If a business deal depends on meeting client deadlines, an engineer may opt to evolve a prototype into the production system. After meeting the deadline, the engineer may want to refactor the system into a well-thought-out architecture, which allows future system changes to be made easily. Refactoring is an incremental approach toward elegant software design.

An excellent software engineer can change the core system into a more elegant architecture without changing the end-user experience. Being able to repeat this exercise easily indicates that the user interface and the core system are modularized: the core system can be developed without affecting the end-user interface, and it can be deployed for many users without affecting their respective user interfaces. The benefits of modularization extend to software dependents as well. Excellent software architecture makes effective use of component modularization, permitting changes to individual components without massive changes to the overall system. Ease of change is a good reason to invest time in developing software architecture before implementing the production system.

UCI’s School of ICS Makes Slashdot

July 29th, 2006

UCI’s School of Information & Computer Science made Slashdot today for their research in topic modeling. Organizing a large number of texts so that the collection can be mined efficiently for useful information is a challenging task. Managing Gigabytes: Compressing and Indexing Documents and Images has been on my Amazon Wish List for some time. I’m fairly certain that it will make an interesting read for those days at the park, beach, or coffee shop.