So, Where Do You Work?

October 16th, 2006

A post at /. points out a USA Today article noting a growing trend of people performing their jobs at places other than home and the office. I realized months ago that I had been using Starbucks as a makeshift office. In general, being able to work at alternative places during atypical hours increases productivity. Sometimes, people just need a change of work environment to boost output.

The ability to let workers work remotely demonstrates a company’s agility. Monolithic companies tend to be very traditional and seem to resist novel work practices. Workers who perform their jobs remotely are highly autonomous. Because people tend to depend on the work of others in order to carry out their own jobs, the delivery of results from a remote worker’s efforts serves as an inherent mechanism for team synchronization. This synchronization helps maintain productivity without a traditional level of supervision.

Supervising remote workers may be challenging for armchair managers, who depend on visual cues to deduce productivity. Managers who believe that the appearance of being busy is indicative of production have trouble assessing the benefit of a worker’s effort when the worker is not visible. Judging the productivity of workers who do not work at the office may prove difficult for such managers.

As suggested earlier, the key measurement of performance is production. Busyness, the state of being busy, neither builds nor supports companies. Companies require results. As companies see consistently increased results from workers who work remotely, the practice of working at a third place will become more widely accepted.

Celebrating Software Modularity

September 7th, 2006

I have recently been involved in the modification of several software systems. In one system, service provider preferences changed, which required implementing code that interfaced with the new provider. A well-engineered software architecture simplifies change. A desirable architecture for a system with uncontrollable external dependencies keeps changes localized to specific parts of the system.

One company wanted to use a hosted Microsoft Exchange Server solution, but their proprietary system depended on an in-house mail server. In this case, the system depended on a single mail component, so implementing a component that worked with the hosted provider and exposed the same interface to the proprietary system was all that was needed to make the transition. Code that had a direct dependency on the mail component required only minor modification of references to the old component. All other code in the system was unaffected.
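The transition described above can be sketched in code. This is a minimal illustration, not the company’s actual system; the interface and class names (`MailService`, `InHouseMailService`, `HostedExchangeMailService`) are hypothetical stand-ins for the real components:

```python
from abc import ABC, abstractmethod

# Hypothetical mail interface that the rest of the system depends on.
class MailService(ABC):
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> bool: ...

class InHouseMailService(MailService):
    def send(self, to: str, subject: str, body: str) -> bool:
        # ...would talk to the in-house mail server here...
        return True

class HostedExchangeMailService(MailService):
    def send(self, to: str, subject: str, body: str) -> bool:
        # ...would talk to the hosted provider instead...
        return True

def notify(mail: MailService, user: str) -> bool:
    # Callers depend only on the interface, not on a concrete provider,
    # so swapping implementations requires no changes here.
    return mail.send(user, "Welcome", "Your account is ready.")
```

Because `notify` (and any other caller) is written against the interface, switching from the in-house server to the hosted provider is a matter of constructing a different implementation at one place in the system.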

There are situations when elegantly engineered software architectures cannot be devised and deployed. A client demanding that a system be delivered earlier than completion estimates provided by engineers is such a situation. If a business deal depends on meeting client deadlines, then an engineer may opt to evolve a prototype into the production system. After meeting the deadline, the engineer may want to refactor the system into an architecture that is well thought out. This allows the engineer to make future system changes easily. Refactoring is an incremental approach toward elegant software design.

An excellent software engineer can migrate the core system to a more elegant architecture without changing the end-user experience. Being able to repeat this exercise easily indicates that the user interface and the core system are well modularized: the core system can be developed without affecting the end-user interface, and it can be deployed for many users without affecting their respective user interfaces. The benefits of modularization extend to software dependents as well. Excellent software architecture makes effective use of component modularization, permitting changes to individual components without requiring massive changes to the overall system. Ease of change is a good reason to invest time in developing software architecture before implementing the production system.

UCI’s School of ICS Makes Slashdot

July 29th, 2006

UCI’s School of Information & Computer Science made Slashdot today for their research in topic modeling. Organizing a large number of texts so that the collection can be mined efficiently for useful information is a challenging task. Managing Gigabytes: Compressing and Indexing Documents and Images has been on my Amazon Wish List for some time. I’m fairly certain that it will make an interesting read for those days at the park, beach, or coffee shop.

Happy SysAdmin Day!

July 27th, 2006

Thanks to Mike Marquez for pointing out to me that SysAdmin Day is tomorrow, July 28. Happy SysAdmin Day to Eric of MSA, Andrew of ISM, Jay of Opt3, the people at our co-lo, and to everyone else who is “In the Trenches (IT).” When the shit hits the fan, technically speaking, we appreciate the peace of mind that comes with knowing that you have it covered.

Considering Specific Location Risks

July 23rd, 2006

A data center in Downtown Los Angeles may boast about the multiple backup power generators it possesses, but is that enough to deal with power problems that span the whole city for multiple days? Although problem scenarios may seem farfetched when an information architecture is functioning normally, problems do occur, as exemplified by the extended power outage around New York City. Consolidated Edison, a 10-billion-dollar company, was unable to provide highly available service. A majority of co-locations do not spend as much as ConEd to create an environment that supports high availability, and a majority of companies are not willing to spend much on minimizing downtime.

To minimize cost while maximizing service availability, a solution for providing high availability should include at least two data centers in separate geographic regions. This is more advisable than building up a single super data center. As an example at a smaller scale, buying two servers and configuring them for high availability is cheaper and safer than buying a single server that is supposedly fault tolerant. A single server, by definition, cannot provide the redundancy needed for high availability. When dealing with high availability, scaling out is simply better than scaling up.
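The scale-out idea can be illustrated with a small failover sketch: a client tries each data center in order and only fails when all of them are unreachable. The endpoint names and the `fetch_from` stand-in are hypothetical, and a real deployment would involve health checks, timeouts, and replication:

```python
# Sketch of client-side failover across two geographically separate
# data centers. "dc-la" and "dc-ny" are made-up endpoint names.
class EndpointDown(Exception):
    pass

def fetch_from(endpoint: str, reachable: set) -> str:
    # Stand-in for a real network call; raises when the endpoint is down.
    if endpoint not in reachable:
        raise EndpointDown(endpoint)
    return f"response from {endpoint}"

def fetch_with_failover(endpoints: list, reachable: set) -> str:
    # Try each data center in order; the request only fails outright
    # when every region is unreachable at once.
    for ep in endpoints:
        try:
            return fetch_from(ep, reachable)
        except EndpointDown:
            continue
    raise EndpointDown("all endpoints down")
```

With one server (or one data center), any single failure takes the service down; with two in separate regions, the request above succeeds as long as either region is up, which is the redundancy a single scaled-up machine cannot offer.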