It was nice to see, in a recent book I have been reading, some recognition of the usefulness of master data management (MDM), and of how the functions included in data virtualization solutions give users a flexibility in architecture design that is worth its weight in gold (IBM and other vendors, take note). What I have perhaps not sufficiently appreciated in the past, and what the book duly notes, is data virtualization's usefulness in speeding MDM implementation.
I think that this is because I have assumed that users would inevitably reinvent the wheel and replicate the functions of data virtualization in order to give themselves a spectrum of architectural choices: at one extreme, putting all master data in a central database and only there; at the other, leaving the existing master data right where it is. It now appears that they have been slow to do so.
And that, in turn, means that the "cache" of a data virtualization solution can act as that master-data central repository while preserving the far-flung local data that composes it. Alternatively, the data virtualization server can provide discovery of the components of a customer master-data record, offer a sandbox in which to define a master data record, alert administrators to new data types that will require changes to the master record, and enforce consistency: all key functions of such a flexible MDM solution. A rough sketch of the idea follows.
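To make the architecture concrete, here is a minimal sketch in Python of a virtualization layer acting as an MDM hub. The source systems, field names, and exact-name matching rule are all hypothetical, and a real deployment would rely on the virtualization server's own federation and caching engines rather than hand-written code; the point is only to show the cache serving as the central repository while the master data stays put in its source systems.

```python
# Minimal sketch: a data virtualization layer serving as an MDM hub.
# Source systems, field names, and the matching rule are hypothetical.

from dataclasses import dataclass, field

# Two "systems of record" left in place; master data stays where it is.
CRM_DB = {
    "c-100": {"name": "Acme Corp", "email": "ops@acme.example"},
}
BILLING_DB = {
    "b-7": {"name": "Acme Corp", "terms": "NET30"},
}

@dataclass
class VirtualMasterRecord:
    """The consolidated customer record, assembled on demand."""
    name: str
    email: str | None = None
    terms: str | None = None
    sources: list[str] = field(default_factory=list)

class VirtualizationServer:
    def __init__(self):
        # The cache acts as the master-data central repository.
        self._cache: dict[str, VirtualMasterRecord] = {}

    def get_customer(self, name: str) -> VirtualMasterRecord:
        if name in self._cache:  # serve from the "master" cache
            return self._cache[name]
        record = VirtualMasterRecord(name=name)
        # Federated lookups: discover matching components in each source.
        for key, row in CRM_DB.items():
            if row["name"] == name:
                record.email = row["email"]
                record.sources.append(f"crm:{key}")
        for key, row in BILLING_DB.items():
            if row["name"] == name:
                record.terms = row["terms"]
                record.sources.append(f"billing:{key}")
        self._cache[name] = record
        return record

if __name__ == "__main__":
    server = VirtualizationServer()
    print(server.get_customer("Acme Corp"))
```

Matching on an exact name is, of course, a stand-in for the fuzzier identity resolution a real MDM hub would perform.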
But the major value-add is speeding up implementation of the MDM solution in the first place: faster definition of master data, easier writing of application-interface code on top of it, and rapid but safe testing and upgrades. As the book says, abstraction is what gives these benefits; the sketch below suggests why.
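As a hedged illustration of that abstraction benefit, the following sketch (again with hypothetical names and a deliberately toy registry API, not any particular vendor's interface) shows application code written once against an abstract master-data view, so that a new source system can be plugged in and tested without touching the application.

```python
# Sketch: applications code against an abstract master-data view, so new
# sources can be registered and tested without changing application code.
# The registry API and source names here are hypothetical.

from typing import Callable

# Each "source adapter" maps a customer name to the fields it contributes.
SourceAdapter = Callable[[str], dict]

class MasterDataView:
    def __init__(self):
        self._adapters: list[SourceAdapter] = []

    def register_source(self, adapter: SourceAdapter) -> None:
        # Upgrade path: sources are added or swapped here, safely.
        self._adapters.append(adapter)

    def lookup(self, name: str) -> dict:
        record: dict = {"name": name}
        for adapter in self._adapters:  # merge contributions from all sources
            record.update(adapter(name))
        return record

# Application interface, written once against the abstraction.
def customer_summary(view: MasterDataView, name: str) -> str:
    rec = view.lookup(name)
    return f"{rec['name']}: {rec.get('segment', 'unknown segment')}"

if __name__ == "__main__":
    view = MasterDataView()
    view.register_source(
        lambda n: {"segment": "enterprise"} if n == "Acme Corp" else {}
    )
    print(customer_summary(view, "Acme Corp"))  # app code unchanged as sources evolve
```

The design point mirrors the book's: because applications depend only on the virtual view, swapping or upgrading a source is a registration change, not an application rewrite, which is exactly where the implementation speedup comes from.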
Therefore, it continues to be worth it for both existing and
new MDM implementations to seriously consider data virtualization.