Service Oriented Architectures Might Really Change Everything

Thursday, 20 March 2008

I put in a brief but productive appearance at the DAMA International Symposium and Wilshire Meta-Data Conference running this week in San Diego. This is THE event for people who care passionately about topics like “A Semantic-Driven Application for Master Data Management” and “Dimensional Modeling – Alternative Designs for Slowly Changing Dimensions”. As you might imagine, there aren't that many of them, and it’s always clear that the attendees revel in spending time with others in their field. I’m sure there are some hilarious data modeling jokes making the rounds at the show, but I wasn’t able to stick around long enough to hear any.
One of the few sessions I did catch was a keynote by Gartner Vice President Michael Blechar. His specific topic was the impact of a services-driven architecture on data management, with the general point being that services depend on data being easily available for many different purposes, rather than tied to individual applications as in the past. This means that the data feeding into those services must be redesigned to fit this broader set of uses.
In any case, what struck me was Blechar’s statement that the fundamental way I’ve always thought about systems is now obsolete. I've always thought that systems do three basic things: accept inputs, process them, and create outputs. This doesn't apply in the new world, where services are strung together to handle specific processes. The services themselves handle the data gathering, processing and outputs, so these occur repeatedly as the process moves from one service to another. (Of course, a system can still have its own processing logic that exists outside a service.) But what’s really new is that a single service may be used in several different processes. This means that services are not simply components within a particular process or system: they have an independent existence of their own.
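To make that concrete for myself, here’s a rough sketch in Python (all of the names are invented) of what a service with an independent existence looks like: a single customer-lookup service called by two unrelated processes, neither of which owns it.

# Hypothetical sketch: one service, reused by two different processes.
# The service is a stub; in practice it would be a call over the network.

def customer_lookup(customer_id):
    """A shared service that returns a customer record from a common data layer."""
    # Stubbed data stands in for the real data source.
    return {"id": customer_id, "name": "Example Customer", "region": "West"}

def fulfill_order(order):
    """One process that happens to use the service."""
    customer = customer_lookup(order["customer_id"])
    return f"Shipping order {order['id']} to {customer['name']} ({customer['region']})"

def build_campaign_list(customer_ids):
    """A second, unrelated process using the very same service."""
    return [customer_lookup(cid) for cid in customer_ids]

print(fulfill_order({"id": 101, "customer_id": 7}))
print(build_campaign_list([7, 8, 9]))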
Exactly how you create and manage these process-independent services is a bit of a mystery to me. After all, you still have to know that they will meet the requirements of whatever processes will use them. Presumably this means those requirements must be developed the old-fashioned way: by defining the process flow in detail. Any subtle differences in what separate processes need from the same service must be accommodated either by making the service more flexible (for example, adding some parameters that specify how it will function in a particular case) or by adding specialized processing outside of the service. I'll assume that the people who worry about these things for a living must have recognized this long ago and worked out their answers.
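As a rough illustration of those two options, again with made-up names, the same hypothetical lookup service could take a parameter that controls how much it returns, while a process with genuinely different needs does its extra work outside the service.

# Option 1: make the service more flexible via a parameter.
def customer_lookup(customer_id, include_history=False):
    record = {"id": customer_id, "name": "Example Customer", "region": "West"}
    if include_history:
        # Only some processes need this heavier payload.
        record["orders"] = [{"id": 101, "total": 250.0}, {"id": 102, "total": 75.0}]
    return record

# Option 2: keep the service generic and add specialized processing outside it.
def estimate_lifetime_value(customer_id):
    record = customer_lookup(customer_id, include_history=True)
    # The service knows nothing about "lifetime value"; that logic lives in the caller.
    return sum(order["total"] for order in record["orders"])

print(estimate_lifetime_value(7))

The point is simply that the flexibility has to live somewhere: either inside the service, as a parameter, or outside it, as caller-specific logic.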
What matters to me is what an end-user can do once these services exist. Blechar argued that users now view their desktop as a “composition platform” that combines many different services and uses the results to orchestrate business processes. He saw executive dashboards in particular as evolving from business intelligence systems (based on a data warehouse or data mart) to business activity monitoring based on the production systems themselves. This closer connection to actual activity would in turn allow the systems to be more “context aware”—for example, issuing alerts and taking actions based on current workloads and performance levels.
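A toy version of that sort of context awareness might look like this: a bit of glue that polls two stubbed operational services and raises an alert when the current workload crosses a threshold. The services, fields and thresholds are all invented for illustration.

import random

# Stubbed operational services; real ones would be live feeds from production systems.
def call_queue_service():
    return {"calls_waiting": random.randint(0, 50)}

def order_backlog_service():
    return {"open_orders": random.randint(0, 200)}

def monitor(queue_alert_at=30, backlog_alert_at=150):
    """Combine several services and react to current conditions, not warehoused history."""
    queue = call_queue_service()
    backlog = order_backlog_service()
    alerts = []
    if queue["calls_waiting"] > queue_alert_at:
        alerts.append(f"Call queue high: {queue['calls_waiting']} waiting")
    if backlog["open_orders"] > backlog_alert_at:
        alerts.append(f"Order backlog high: {backlog['open_orders']} open")
    return alerts or ["All clear"]

print(monitor())

Nothing here touches a data warehouse; whatever “intelligence” there is comes straight from what the operational services report right now.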
Come to think of it, my last post discussed eglue and others doing exactly this to manage customer interactions. What a more comprehensive set of services should do is make it easier to set up this type of context-aware decision making.
Somewhat along these same lines, Computerworld this morning describes another Gartner paper arguing that IT’s role in business intelligence will be “marginalized” by end-users creating their own business intelligence systems using tools like enterprise search, visualization and in-memory analytics (hello, QlikView!). The four reader comments posted so far have been not-so-politely skeptical of this notion, basically because they feel IT will still do all the heavy lifting of building the databases that provide information for these user-built systems. This is correct as far as it goes, but it misses the point that IT will be exposing this data as services for operational reasons anyway. In that case, no additional IT work is needed to make it available for business intelligence. Once end-users have self-service tools to access and analyze the data provided by these operational services, business intelligence systems would emerge without IT involvement. I'd say that counts as "marginalized".
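If that’s right, the “self-service” step is not much more than pulling data from an operational service and summarizing it locally. Here is a deliberately simplified sketch, with a stubbed service and made-up fields, of what an end user’s tool would be doing under the hood.

from collections import defaultdict

# Stub for an operational order service that IT already exposes for other reasons.
def order_service():
    return [
        {"region": "West", "total": 250.0},
        {"region": "East", "total": 75.0},
        {"region": "West", "total": 120.0},
    ]

def revenue_by_region():
    """Ad hoc analysis built directly on the operational service; no warehouse, no separate IT project."""
    totals = defaultdict(float)
    for order in order_service():
        totals[order["region"]] += order["total"]
    return dict(totals)

print(revenue_by_region())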
Just to bring these thoughts full circle: this means that designing business intelligence systems with the old “define inputs, define processes, define outputs” model would indeed be obsolete. The inputs would already be available as services, while the processes and outputs would be created simultaneously in end-user tools. I’m not quite sure I really believe this is going to happen, but it’s definitely food for thought.