This is the first in a series of articles offering implementation advice on optimizing the design and configuration of Oracle Utilities products.
Early in my career, my mentor at the time suggested that I expand my knowledge outside the technical area. The idea was that non-technical techniques would augment my technical knowledge. He suggested a series of books and articles that would expand my thinking. Today I treasure those books and articles and regularly reread them to reinforce my skills.
Recently I was chatting to customers about optimizing their interface designs using a technique typically called "Responsibility Led Design". The principle is basically that each participant in an interface has distinct responsibilities for the data interchanged, and it is important to make sure designs take this into account. This reminded me of one of my favorite books, "The One Minute Manager Meets The Monkey" by Ken Blanchard, William Oncken Jr. and Hal Burrows. I even have a copy of the audio version, which is both informative and very entertaining. The book was based on a very popular Harvard Business Review article entitled "Management Time: Who's Got The Monkey" and expands on that original idea.
To paraphrase the article, a monkey is a task that is not your responsibility but is somehow assigned to you. The term for this is the "monkey jumping on your back", or simply the "monkey on your back". This epitomizes the concept of responsibility.
So what has this got to do with design, or even Oracle Utilities products, you might ask?
One of the key designs for every implementation is sending data INTO the Oracle Utilities products. These are inbound interfaces, for obvious reasons. In every interface there is a source application and a target application. The responsibility of the source application is to send valid data to the target application for processing. Now, one of the problems I see with implementations is when the source application sends invalid data to the target. There are two choices in this case:
- Send back the invalid request - This means that if the data transferred from the source is invalid for the target, then the target should reject the data and ask the source to resend. Most implementations use various techniques to achieve this. This keeps the target clean of invalid data and ensures the source corrects its data before sending it off again. This is what I call correct behavior (a minimal sketch of this pattern follows this list).
- Accept the invalid request (usually in a staging area) and correct it within the target for reprocessing - This means the data is accepted by the target, regardless of the error, and corrected within the target application to complete the processing.
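To make the first option concrete, here is a minimal sketch in Java of what rejecting at the boundary can look like. This is not Oracle Utilities code; the class, record, and field names (InboundReceiver, InboundReading, ValidationResult and so on) are hypothetical and are only there to show the shape of the pattern: validate at the point of receipt and hand invalid payloads straight back to the source with the reasons, instead of storing them for later correction in the target.

```java
import java.util.ArrayList;
import java.util.List;

public class InboundReceiver {

    // Hypothetical inbound payload: a meter reading sent by the source application.
    public record InboundReading(String meterId, Double value) {}

    // Outcome returned to the source: accepted, or rejected with the reasons.
    public record ValidationResult(boolean accepted, List<String> errors) {}

    public ValidationResult receive(InboundReading reading) {
        List<String> errors = new ArrayList<>();
        if (reading.meterId() == null || reading.meterId().isBlank()) {
            errors.add("meterId is required");
        }
        if (reading.value() == null || reading.value() < 0) {
            errors.add("value must be a non-negative number");
        }
        if (!errors.isEmpty()) {
            // The monkey stays with the source: nothing is staged in the target,
            // the source receives the errors and must resend a corrected payload.
            return new ValidationResult(false, errors);
        }
        process(reading); // only clean data enters the target application
        return new ValidationResult(true, List.of());
    }

    private void process(InboundReading reading) {
        // Target-side processing of a valid transaction would go here.
    }
}
```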
More and more I am seeing implementations taking the latter design philosophy. This is not efficient, as the responsibility for data cleansing (the monkey in this context) has jumped onto the back of the target application. At this point the source application has no responsibility for cleaning its own data, and no real incentive to ever send clean data to the target, as the target now has the monkey firmly on its back. This has consequences for the target application, as it can increase the resources (human and non-human) needed to correct data errors from the source application. Some of the customers I chatted to found that, while the volume of these transactions was initially low, the same errors kept being sent, and over time the cumulative effect of the data cleansing on the target started to get out of control. Typically, at this point, customers ask for advice on how to reduce the impact.
In an Oracle Utilities product world, this exhibits itself as a large number of interface To Do entries to manage, as well as staging records and the additional storage they consume. The latter is quite important, as implementations typically forget to remove completed transactions from the staging area once they have been corrected and applied. The products ship special purge jobs to remove completed staged transactions, and we recently added support for ILM (Information Lifecycle Management) for staging records.
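The shipped purge jobs are the right tool inside the products; purely to illustrate what that housekeeping amounts to, here is a hedged JDBC sketch that removes applied transactions once they pass a retention window. The table and column names (STG_INBOUND_TXN, STATUS, COMPLETED_DTTM) are assumptions for illustration, not product tables.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StagingPurge {

    // Remove staged transactions that have already been applied and are older
    // than the retention window. Returns the number of rows purged.
    public int purgeCompleted(Connection conn, int retentionDays) throws SQLException {
        String sql = "DELETE FROM STG_INBOUND_TXN "       // hypothetical staging table
                   + "WHERE STATUS = 'COMPLETED' "
                   + "AND COMPLETED_DTTM < SYSDATE - ?";  // Oracle date arithmetic in days
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, retentionDays);
            return ps.executeUpdate();
        }
    }
}
```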
My advice to these customers is:
- Make sure that you assign invalid transactions back to the source application. This will ensure the source application maximizes the quality of its data and hopefully prevents common transaction errors from recurring. In other words, the monkey does not jump from the source to the target.
- If you choose to let the monkey jump onto the target's back, for any reason, then use staging tables and make sure they are cleaned up to minimize the impact. Monitor the error rates and volumes, and ensure the source application is informed so it can correct the error trends (see the sketch below).
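As a companion to that second point, here is a small sketch of the kind of monitoring that keeps the conversation with the source application honest: count rejections per source and error code so recurring trends can be reported back and fixed at the source. The names are again hypothetical, and the "report" here simply prints to standard output.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class RejectionMonitor {

    // Rejection counts keyed by "sourceSystem|errorCode".
    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    // Record one rejected transaction from a given source with a given error code.
    public void recordRejection(String sourceSystem, String errorCode) {
        counts.computeIfAbsent(sourceSystem + "|" + errorCode, k -> new LongAdder())
              .increment();
    }

    // Report every source/error combination whose count has reached the threshold,
    // so the owning source application can be asked to fix the trend.
    public void reportTrends(long threshold) {
        counts.forEach((key, count) -> {
            if (count.sum() >= threshold) {
                String[] parts = key.split("\\|", 2);
                System.out.printf("Source %s keeps sending error %s (%d occurrences)%n",
                        parts[0], parts[1], count.sum());
            }
        });
    }
}
```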
In short, avoid the monkey in your inbound transactions. This will keep responsibilities where they belong and ensure the resources you allocate to both your source and target applications are used efficiently.