Data flows can be documented at different levels of detail: Subject Area, business entity, or even the attribute level. Systems can be represented by network segments, platforms, common application sets, or individual servers. Data flows can be represented by two-dimensional matrices (last week's figure) or in data flow diagrams (this figure).
A matrix gives a clear overview of what data the processes create and use. The benefit of showing data requirements in a matrix is that it takes into account that data does not flow in only one direction: data exchange between processes is many-to-many and quite complex, and any data may appear anywhere. In addition, a matrix can be used to clarify each process's data acquisition responsibilities and the data dependencies between processes, which in turn improves the process documentation. Those who prefer working with business capabilities can show this in the same way, simply swapping the process axis for capabilities. Building such matrices is a long-standing practice in enterprise modeling. IBM introduced this practice in its Business Systems Planning (BSP) method. James Martin later popularized it in his Information Systems Planning (ISP) method during the 1980s.
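As a concrete illustration, here is a minimal sketch of such a CRUD matrix in Python. The process and entity names, and the creators_of helper, are illustrative assumptions; in practice these matrices are usually maintained in modeling tools or spreadsheets.

```python
# A minimal sketch of a process-to-data CRUD matrix, using hypothetical
# process and entity names. Rows are business processes; columns are data
# entities; cell values are the responsibilities (Create, Read, Update,
# Delete) each process has for each entity.
crud_matrix = {
    "Take Order":     {"Customer": "R",  "Order": "CRU", "Product": "R"},
    "Ship Order":     {"Order": "RU",    "Shipment": "CRU"},
    "Manage Catalog": {"Product": "CRUD"},
}

def creators_of(entity: str) -> list[str]:
    """Return the processes that create the given entity -- useful for
    clarifying data acquisition responsibilities."""
    return [p for p, cells in crud_matrix.items() if "C" in cells.get(entity, "")]

print(creators_of("Order"))  # ['Take Order']
```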
The data flow in this figure is a traditional high-level data flow diagram depicting what kind of data flows between systems. Such diagrams can be drawn in many formats and at many levels of detail.
Data flows are a type of data lineage documentation that depicts how data moves through business processes and systems. End-to-end data flows illustrate where the data originated, where it is stored and used, and how it is transformed as it moves inside and between diverse processes and systems. Data lineage analysis can help explain the state of data at a given point in the data flow.
Data flows map and document relationships between data and:
Applications within a business process
Data stores or databases in an environment
Network segments (useful for security mapping)
Business roles, depicting which roles have responsibility for creating, updating, using and deleting data (CRUD)
Locations where local differences occur
Data flows can be documented at different levels of detail: Subject Area, business entity, or even the attribute level. Systems can be represented by network segments, platforms, common application sets, or individual servers. Data flows can be represented by two-dimensional matrices (this figure) or in data flow diagrams (next week's figure).
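To make the idea of end-to-end data flows concrete, the sketch below documents flows as directed edges between systems, tagged with the Subject Area each edge carries, and traces where a Subject Area's data originates. The system names, Subject Area tags, and the upstream_sources helper are illustrative assumptions, not a standard notation.

```python
# A minimal sketch of documenting data flows as directed edges
# (source system, target system, Subject Area); all names hypothetical.
flows = [
    ("CRM", "Data Warehouse", "Customer"),
    ("Order System", "Data Warehouse", "Sales Order"),
    ("Data Warehouse", "Reporting Mart", "Customer"),
]

def upstream_sources(system: str, subject_area: str) -> set[str]:
    """Trace where a Subject Area's data originates by walking flows backward."""
    sources = set()
    for src, dst, sa in flows:
        if dst == system and sa == subject_area:
            sources.add(src)
            sources |= upstream_sources(src, subject_area)
    return sources

print(upstream_sources("Reporting Mart", "Customer"))  # {'Data Warehouse', 'CRM'}
```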
This figure depicts three Subject Area diagrams (simplified examples), each containing a Conceptual Data Model with a set of entities. Relationships may cross Subject Area borders; each entity in an enterprise data model should reside in only one Subject Area, but can be related to entities in any other Subject Area.
Hence, the conceptual enterprise data model is built up from the combination of Subject Area models. The enterprise data model can be built using a top-down or a bottom-up approach. The top-down approach means starting by forming the Subject Areas and then populating them with models. When using a bottom-up approach, the Subject Area structure is based on existing data models. A combination of the two approaches is usually recommended: start bottom-up with existing models, then complete the enterprise data model by delegating Subject Area modeling to projects.
The Subject Area discriminator (i.e., the principles that form the Subject Area structure) must be consistent throughout the enterprise data model. Frequently used subject area discriminator principles include: using normalization rules, dividing Subject Areas from systems portfolios (i.e., funding), forming Subject Areas from data governance structure and data ownership (organizational), using top-level processes (based on the business value chains), or using business capabilities (enterprise architecture-based). The Subject Area structure is usually most effective for Data Architecture work if it is formed using normalization rules. The normalization process will establish the major entities that carry/constitute each Subject Area.
Some organizations create an Enterprise Data Model (EDM) as a stand-alone artifact. In other organizations, it is understood as composed of data models from different perspectives and at different levels of detail, that consistently describe an organization's understanding of data entities, data attributes, and their relationships across the enterprise. An EDM includes both universal (Enterprise-wide Conceptual and Logical Models) and application- or project-specific data models, along with definitions, specifications, mappings and business rules.
Adopting an industry standard model can jumpstart the process of developing an EDM. Such models provide a useful guide and reference. However, even if an organization starts with a purchased data model, producing enterprise-wide data models requires a significant investment. Work includes defining and documenting an organization's vocabulary, business rules, and business knowledge. Maintaining and enriching an EDM requires an ongoing commitment of time and effort.
An organization that recognizes the need for an EDM must decide how much time and effort it can devote to building and maintaining it. EDMs can be built at different levels of detail, so resource availability will influence initial scope. Over time, as the needs of the enterprise demand, the scope and level of detail captured within an EDM typically expands. Most successful EDMs are built incrementally and iteratively, using layers. This figure shows how different types of models are related and how conceptual models are ultimately linkable to physical application data models. It distinguishes:
A conceptual overview of the enterprise's subject areas
Views of entities and relationships for each subject area
Detailed, partially attributed logical views of these same subject areas
Logical and physical models specific to an application or project
All levels are part of the EDM, and linkages create paths to trace an entity from top to bottom and between models in the same level.
Vertical: Models in each level map to models in other levels. Model lineage is created using these maps. For example, a table or file MobileDevice in a project-specific physical model may link to a MobileDevice entity in the project-specific logical model, a MobileDevice entity in the Product subject area in the Enterprise Logical Model, a Product conceptual entity in the Product Subject Area Model, and to the Product entity in the Enterprise Conceptual Model.
Horizontal: Entities and relationships may appear in multiple models in the same level; entities in logical models centered on one topic may relate to entities in other topics, marked or noted as external to the subject area on the model images. A Product Part entity may appear in the Product subject area models and in the Sales Order, Inventory, and Marketing subject areas, related as external links.
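The vertical linkage can be pictured with a small sketch that follows the MobileDevice-to-Product example above. The link table and trace_lineage helper are hypothetical; in practice model lineage is typically held in a metadata repository or modeling tool.

```python
# A minimal sketch of vertical model lineage: each entry maps a model
# artifact to its counterpart one level up, so an entity can be traced
# from a physical table to the Enterprise Conceptual Model.
links_up = {
    ("physical",           "MobileDevice"): ("project-logical",       "MobileDevice"),
    ("project-logical",    "MobileDevice"): ("enterprise-logical",    "MobileDevice"),
    ("enterprise-logical", "MobileDevice"): ("subject-area",          "Product"),
    ("subject-area",       "Product"):      ("enterprise-conceptual", "Product"),
}

def trace_lineage(level: str, entity: str) -> list[tuple[str, str]]:
    """Follow the vertical links from a model artifact to the top level."""
    path = [(level, entity)]
    while (level, entity) in links_up:
        level, entity = links_up[(level, entity)]
        path.append((level, entity))
    return path

for lvl, name in trace_lineage("physical", "MobileDevice"):
    print(f"{lvl}: {name}")
```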
An enterprise data model at all levels is developed using data modeling techniques.
The most well-known enterprise architectural framework, the Zachman Framework, was developed by John A. Zachman in the 1980s. It has continued to evolve. Zachman recognized that in creating buildings, airplanes, enterprises, value chains, projects, or systems, there are many audiences, and each has a different perspective about architecture. He applied this concept to the requirements for different types and levels of architecture within an enterprise.
The Zachman framework is an ontology - the 6 x 6 matrix comprises the complete set of models required to describe an enterprise and the relationships between them. It does not define how to create the models. It simply shows what models should exist.
The two dimensions in the matrix framework are the communication interrogatives (i.e., what, how, where, who, when, why) as columns and the reification transformations (Identification, Definition, Representation, Specification, Configuration, and Instantiation) as rows. The framework classifications are represented by the cells (the intersection between the interrogatives and the transformations). Each cell in the Zachman framework represents a unique type of design artifact.
Communication interrogatives are the fundamental questions that can be asked about any entity. Translated to enterprise architecture, the columns can be understood as follows:
What (the inventory column): Entities used to build the architecture
How (the process column): Activities performed
Where (the distribution column): Business location and technology location
Who (the responsibility column): Roles and organizations
When (the timing column): Intervals, events, cycles, and schedules
Why (the motivation column): Goals, strategies, and means
Reification transformations represent the steps necessary to translate an abstract idea into a concrete instance (an instantiation). These perspectives are depicted as rows: planner, owner, designer, builder, implementer, and user. Each has a different view of the overall process and different problems to solve. For example, each perspective has a different relation to the What (inventory or data) column:
The executive perspective (business context): Lists of business elements defining scope in identification models.
The business management perspective (business concepts): Clarification of the relationships between business concepts defined by Executive Leaders as Owners in definition models.
The architect perspective (business logic): System logical models detailing system requirements and unconstrained design represented by Architects as Designers in representation models.
The engineer perspective (business physics): Physical models optimizing the design for implementation for specific use under the constraints of specific technology, people, costs, and timeframes, specified by Engineers as Builders in specification models.
The technician perspective (component assemblies): A technology-specific, out-of-context view of how components are assembled and operated configured by Technicians as Implementers in configuration models.
The user perspective (operations classes): Actual functioning instances used by Workers as Participants. There are no models in this perspective.
As noted previously, each cell in the Zachman Framework represents a unique type of design artifact, defined by the intersection of its row and column. Each artifact represents how the specific perspective answers the fundamental questions.
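As a rough illustration, the sketch below represents the 6 x 6 classification scheme in Python. Only the interrogatives and transformations come from the framework itself; the cell helper and its artifact naming are illustrative assumptions.

```python
# A minimal sketch of the Zachman 6 x 6 classification scheme: each cell
# (transformation row x interrogative column) names a unique type of
# design artifact. The naming convention below is illustrative only.
interrogatives = ["What", "How", "Where", "Who", "When", "Why"]
transformations = ["Identification", "Definition", "Representation",
                   "Specification", "Configuration", "Instantiation"]

def cell(row: str, column: str) -> str:
    """Name the artifact type at a row/column intersection."""
    if row not in transformations or column not in interrogatives:
        raise ValueError("not a Zachman row/column")
    return f"{row} model of {column}"

# 'Representation model of What' -- the architect's logical data model
print(cell("Representation", "What"))
```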
Welcome New Members!
We'd like to welcome our 20 new Professional Members who have joined the chapter in Q2 & Q3 2023. We're thrilled you're here and hope you are enjoying all the perks of membership!
We also warmly welcome our 60 new Guest Members! We're excited you're here and hope you explore and find a way to connect with our community at an upcoming event.
Primary Data Architecture outcomes include:
Data storage and processing requirements
Designs of structures and plans that meet the current and long-term data requirements of the enterprise
Architects seek to design in a way that brings value to the organization. This value comes through an optimal technical footprint, operational and project efficiencies, and the increased ability of the organization to use its data. Getting there requires good design, planning, and the ability to ensure that the designs and plans are executed effectively.
To reach these goals, Data Architects define and maintain specifications that:
Define the current state of data in the organization
Provide a standard business vocabulary for data and components
Align Data Architecture with enterprise strategy and business architecture
Express strategic data requirements
Outline high-level integrated designs to meet these requirements
Integrate with overall enterprise architecture roadmap
An overall Data Architecture practice includes:
Using Data Architecture artifacts (master blueprints) to define data requirements, guide data integration, control data assets, and align data investments with business strategy
Collaborating with, learning from, and influencing various stakeholders who are engaged in improving the business or developing IT systems
Using Data Architecture to establish the semantics of an enterprise, via a common business vocabulary
DAMA-RMC is looking for guest bloggers to be featured on our website, and in our newsletters and social media posts. This is a great opportunity to grow your network and reach thousands of new contacts sharing your data knowledge and expertise. Interested bloggers can reach out to Cher Fox, VP of Marketing at MarketingVP@damarmc.org.
Details for submission and publishing are as follows:
For a range of submission topics, please refer to the DAMA Wheel.
Thank you for your interest in being a guest blogger for DAMA-RMC.
Issue Management is the process for identifying, quantifying, prioritizing, and resolving data governance-related issues, including:
Authority: Questions regarding decision rights and procedures
Change management escalations: Issues arising from the change management process
Compliance: Issues with meeting compliance requirements
Conflicts: Conflicting policies, procedures, business rules, names, definitions, standards, architecture, and data ownership, as well as conflicting stakeholder interests in data and information
Conformance: Issues related to conformance to policies, standards, architecture, and procedures
Contracts: Negotiation and review of data sharing agreements, buying and selling data, and cloud storage
Data security and identity: Privacy and confidentiality issues, including breach investigations
Data quality: Detection and resolution of data quality issues, including disasters or security breaches
Many issues can arise locally in Data Stewardship teams. Issues requiring communication and/or escalation must be logged, and may be escalated to the Data Stewardship teams or higher, to the Data Governance Council (DGC), as shown in this figure. A Data Governance scorecard can be used to identify trends related to issues, such as where within the organization they occur and what their root causes are. Issues that cannot be resolved by the DGC should be escalated to corporate governance and/or management.
Data governance requires control mechanisms and procedures for:
Identifying, capturing, logging, tracking and updating issues
Assignment and tracking of action items
Documenting stakeholder viewpoints and resolution alternatives
Determining, documenting, and communicating issue resolutions
Facilitating objective, neutral discussions where all viewpoints are heard
Escalating issues to higher levels of authority
Data issue management is very important. It builds credibility for the DG team, has direct, positive effects on data consumers, and relieves the burden on production support teams. Resolving issues requires control mechanisms that demonstrate the work effort and impact of resolution.
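As one way to picture such control mechanisms, the sketch below models a simple issue log with an escalation path from a Data Stewardship team up to the DGC and corporate governance. The Issue class, its field names, and the three-level path are illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of an issue log with escalation; levels and fields
# are illustrative, following the escalation path described above.
from dataclasses import dataclass, field

ESCALATION_PATH = ["Data Stewardship team", "DGC", "Corporate governance"]

@dataclass
class Issue:
    identifier: str
    category: str          # e.g., "Authority", "Compliance", "Data quality"
    description: str
    level: int = 0         # index into ESCALATION_PATH
    actions: list[str] = field(default_factory=list)

    def escalate(self) -> str:
        """Move the issue one step up the escalation path."""
        if self.level < len(ESCALATION_PATH) - 1:
            self.level += 1
        return ESCALATION_PATH[self.level]

issue = Issue("DG-042", "Conflicts", "Two definitions of 'active customer'")
print(issue.escalate())  # 'DGC'
```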