The History of Data Modeling
The programming of computers is an abstract realm of thought. In the 1970s, it was thought that programmers would benefit from greater use of graphical representations. On the process side, flowcharts gave way to data flow diagrams. Then, in the mid-1970s, entity-relationship modeling was introduced as a means of representing data structures graphically.
Entity-relationship models are used during the first stage of information system design, the requirements analysis phase, to describe the types of information that need to be stored in the database. The technique can describe any ontology for a particular area of interest. If the information system being designed is based on a database, the conceptual data model is later mapped onto a logical data model, which in turn is mapped onto a physical model during physical design. (Sometimes both of these later phases are referred to as “physical design.”)
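The mapping from conceptual model to physical schema can be illustrated with a toy sketch. Everything here is hypothetical: the entity name, its attributes, and the type mapping are invented for illustration, not taken from any particular methodology.

```python
# A toy sketch of mapping a conceptual entity onto a physical SQL schema.
# The entity "Customer", its attributes, and the type mapping below are
# all hypothetical stand-ins for a real conceptual data model.

CONCEPTUAL_TO_SQL = {"string": "VARCHAR(255)", "integer": "INTEGER", "date": "DATE"}

def to_ddl(entity, attributes, primary_key):
    """Render a conceptual entity as a CREATE TABLE statement."""
    columns = []
    for name, kind in attributes:
        column = f"{name} {CONCEPTUAL_TO_SQL[kind]}"
        if name == primary_key:
            column += " PRIMARY KEY"
        columns.append(column)
    return f"CREATE TABLE {entity} (\n  " + ",\n  ".join(columns) + "\n);"

print(to_ddl("Customer",
             [("customer_id", "integer"),
              ("name", "string"),
              ("signup_date", "date")],
             primary_key="customer_id"))
```

A real physical design step would also consider indexes, storage parameters, and vendor-specific types; the point here is only the shape of the conceptual-to-physical mapping.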
Object-oriented programming has been used in writing programs since the 1960s. In the very beginning, programs were organized around what they did, and data was attached only as necessary. Object-oriented programmers instead organized their work around the objects the data described. For real-time systems, this was a major breakthrough. In the 1980s, it broke into the mainstream data processing scene, when graphical user interfaces brought object-oriented programming into commercial applications. Programmers realized that the problem of defining requirements would benefit enormously from insight into the realm of objects, and the idea of object models was introduced, without acknowledgment of the fact that systems analysts had already discovered similar models.
In the 1980s, a significantly different approach to data modeling was developed by G.M. Nijssen. Called NIAM, short for “Nijssen’s Information Analysis Methodology,” it has since been renamed ORM, or “object role modeling.” Rather than depicting entity types as analogs of relational tables, it represents the relationships, the roles that objects play, directly. With its focus on using natural language to make data modeling accessible to a wider audience, ORM is considerably better suited to describing business rules and constraints.
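ORM's fact-based style, and the kind of constraint it makes easy to state, can be gestured at with a small sketch. The facts, role names, and helper functions below are hypothetical illustrations, not part of any ORM tooling.

```python
# A minimal, hypothetical sketch of a fact-based style: instead of defining
# entity-type tables up front, we record elementary facts in which objects
# play named roles, and express business rules as checks over those facts.
from collections import Counter

facts = [
    ("Alice", "works for", "Acme"),   # Person playing the employee role
    ("Bob",   "works for", "Acme"),
    ("Acme",  "is located in", "Oslo"),
]

def objects_playing(role, fact_list):
    """Collect the objects that play the subject role in a given fact type."""
    return [subject for subject, r, _ in fact_list if r == role]

def violates_uniqueness(fact_list, role):
    """Business rule: each subject may play this role at most once.

    Returns the subjects that break the rule (e.g. a person recorded as
    working for two companies)."""
    counts = Counter(subject for subject, r, _ in fact_list if r == role)
    return [subject for subject, n in counts.items() if n > 1]

print(objects_playing("works for", facts))
print(violates_uniqueness(facts, "works for"))
```

The second function is the interesting part: a rule like “each person works for at most one company” is stated directly over the facts, rather than being buried in a table's key structure.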
More recently, agile methodologies have come to the forefront of data modeling. Central to these methodologies is the concept of evolutionary design. From this standpoint, you acknowledge that you cannot fix all of a system’s requirements up front, so a detailed design phase at the very beginning of a project is impractical. Instead, the system’s design must evolve through the software’s many iterations.
People tend to learn by experimenting, by trying new things out, and evolutionary design recognizes this key element of human nature. Under this approach, developers are expected to experiment with ways of implementing a particular feature, and it may take several tries before they arrive at a settled method. The same holds for database design.
Some experts claim that managing multiple databases is virtually impossible. Others counter that, given the right tools, a database administrator can oversee a hundred database instances with relative ease.
While experimentation is important, it is also important to periodically bring the different approaches you have tried back together into an integrated whole. For this, you need a shared master database that all work flows from. When beginning a task, copy the master into your own workspace, manipulate it there, and then merge your changes back into the master copy. Be sure to integrate at least once a day.
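The copy-modify-integrate cycle can be sketched in miniature. This is only an illustration under simplifying assumptions: a schema is modeled as a plain dict of table names to column lists, and the merge rule (union of tables and columns) is a hypothetical stand-in for real schema-migration tooling.

```python
# Illustrative sketch of the copy-modify-integrate cycle described above.
# A "schema" here is just a dict mapping table names to column lists; the
# names and the naive merge rule are hypothetical.
import copy

master = {"customer": ["id", "name"]}

def check_out(master_schema):
    """Copy the master into a private workspace."""
    return copy.deepcopy(master_schema)

def integrate(master_schema, workspace):
    """Merge workspace changes back into the master copy (naive union)."""
    for table, columns in workspace.items():
        existing = master_schema.setdefault(table, [])
        for column in columns:
            if column not in existing:
                existing.append(column)
    return master_schema

# A developer experiments in a private copy, then integrates daily.
workspace = check_out(master)
workspace["customer"].append("email")        # local schema change
workspace["order"] = ["id", "customer_id"]   # new local table

integrate(master, workspace)
print(master)
# → {'customer': ['id', 'name', 'email'], 'order': ['id', 'customer_id']}
```

Because the workspace is a deep copy, experiments never touch the master until the explicit integrate step, which is exactly the discipline the text recommends.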
Frequent integration is key. Performing many small integrations is far easier than occasionally performing large ones, because the difficulty of an integration tends to grow exponentially with its size. Counterintuitive as it may seem, making many small changes is easier in practice. The Software Configuration Management community has noted the same pattern when dealing with source code.