Consider these cultural impediments, and guide your actions with the view that architecture is a tool that enables and is integral to systems engineering. The following are best practices and lessons learned for making architectures work in your program. Purpose is paramount. Determine the purpose of the architecting effort and the views and models needed. Plan the architecting steps to generate only the views and models needed to meet that purpose.
Ultimately, models and views should help each stakeholder reason about the structure and behavior of the system or part of the system they represent, so that stakeholders can conclude that their objectives will be met. Frameworks help by establishing minimum guidelines for each stakeholder's interest. However, stakeholders can have other concerns, so use the framework requirements as a starting point for discussion to help uncover as many concerns as possible.
A plan is a point of departure. There should be clear milestone development dates, and the needed resources should be established for the development of the architecture views and models. Some views are precursors for others. Ensure that it is understood which views are "feeds" for others. Know the relationships. Models and views that relate to each other should be consistent, concordant, and developed with reuse in mind.
It is good practice to identify the data or information that each view shares, and manage it centrally to help create the different views. Be the early bird. Inject the idea of architectures early in the process.
Continuously influence your project to use models and views throughout execution. The earlier the better. No one trusts a skinny cook. By using models as an analysis tool yourself, particularly in day-to-day and key discussions, you maintain focus on key architectural issues and demonstrate how architecture artifacts can be used to enable decision making. Which way is right and how do I get there from here? Architectures can be used to help assess today's alternatives and different evolutionary paths to the future.
Views of architecture alternatives can be used to help judge the strengths and weaknesses of different approaches.
Views of "as is" and "to be" architectures help stakeholders understand potential migration paths and transitions. Try before you buy. Architectures or parts of them can sometimes be "tried out" during live exercises.
This can either confirm an architectural approach for application to real-world situations or be the basis for refinement that better aligns the architecture with operational reality. Architectures also can be used as a basis for identifying prototyping and experimentation activities to reduce technical risk and engagements with operational users to better illuminate their needs and operational concepts.
Taming the complexity beast. If a program or an effort is particularly large, models and views can provide a disciplined way of communicating how you expect the system to behave.
Some behavioral models such as business process models, activity models, and sequence diagrams are intuitive, easy to use, and easy to change to capture consensus views of system behavior. Keep it simple. Avoid diagrams that are complicated and non-intuitive, such as node connectivity diagrams with many nodes and edges, especially in the early phases of a program. To illustrate the concepts of views and viewpoints, consider two stakeholders in an air traffic system: the pilot and the controller. Each is concerned with a different subset of the system, and that subset can be described by an abstract model called a viewpoint, such as an "air flight" versus an "air space" model.
This description of the view is documented in a partially specialized language, such as "pilot-speak" versus "controller-speak". Tools are used to assist the stakeholders, and they interface with each other in terms of the language derived from the viewpoint ("pilot-speak" versus "controller-speak"). When stakeholders use common tools, such as the radio contact between pilot and controller, a common language is essential.
Now let us map this example to the Information Systems Architecture. Consider two stakeholders in a new small computing system: the users and the developers. The users of the system have a view of the system, and the developers of the system have a different view.
Neither view represents the whole system, because each stakeholder's perspective limits what that stakeholder sees of the system. The view of the developer is one of productivity and tools, and doesn't include things such as actual live data and connections with consumers.
In this example, one viewpoint is the description of how the user sees the system, and the other viewpoint is how the developer sees the system. Users describe the system from their perspective, using a model of availability, response time, and access to information.
All users of the system use this model, and the model has a specific language. Developers describe the system differently than users, using a model of software connected to hardware distributed over a network, etc. However, there are many types of developers (database, security, etc.).
Tools exist for both users and developers. Tools such as online help are there specifically for users, and attempt to use the language of the user. Many different tools exist for different types of developers, but they suffer from the lack of a common language that is required to bring the system together.
It is difficult, if not impossible, in the current state of the tools market to have one tool interoperate with another tool. This section attempts to deal with views in a structured manner, but this is by no means a complete treatise on views. The concepts can be summarized as follows: stakeholders have concerns; each view addresses a subset of those concerns; and each view conforms to a viewpoint, which defines the modeling concepts, language, and tools used to construct it.
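To make these relationships concrete, here is a minimal sketch of how stakeholders, concerns, viewpoints, and views relate. The class and field names are illustrative assumptions for explanation only, not a TOGAF metamodel.

```python
# Illustrative sketch (not a TOGAF metamodel): how stakeholders, viewpoints,
# and views relate. Names and fields are assumptions for explanation only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Stakeholder:
    name: str                      # e.g. "pilot", "controller", "user", "developer"
    concerns: List[str]            # the interests this stakeholder has in the system


@dataclass
class Viewpoint:
    name: str                      # the abstract model, e.g. "air flight" vs. "air space"
    language: List[str]            # the partially specialized language, e.g. "pilot-speak"
    addresses: List[str]           # which concerns this viewpoint is able to express


@dataclass
class View:
    viewpoint: Viewpoint           # every view conforms to exactly one viewpoint
    of_system: str                 # the system (or part of it) being represented
    content: dict = field(default_factory=dict)  # the models/diagrams making up the view


def uncovered_concerns(stakeholder: Stakeholder, views: List[View]) -> List[str]:
    """Concerns the given stakeholder has that no supplied view addresses."""
    covered = {c for v in views for c in v.viewpoint.addresses}
    return [c for c in stakeholder.concerns if c not in covered]


pilot = Stakeholder("pilot", ["route", "weather", "fuel"])
flight_vp = Viewpoint("air flight", ["pilot-speak"], ["route", "fuel"])
flight_view = View(flight_vp, "air traffic system")
print(uncovered_concerns(pilot, [flight_view]))   # ['weather'] -> a concern still to address
```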
In the following subsections TOGAF presents some recommended views, some or all of which may be appropriate in a particular architecture development. This is not intended as an exhaustive set of views, but simply as a starting point. Those described may be supplemented by additional views as required. These TOGAF subsections on views should be considered as guides for the development and treatment of a view, not as a full definition of a view. Each subsection describes the stakeholders related to the view, their concerns, and the entities modeled and the language used to depict the view (the viewpoint).
The viewpoint provides architecture concepts from the different perspectives, including components, interfaces, and allocation of services critical to the view. The viewpoint language, analytical methods, and modeling methods associated with views are typically applied with the use of appropriate tools.
This view should be developed for the users. It focuses on the functional aspects of the system from the perspective of the users of the system.
Business scenarios (see Business Scenarios) are an important technique that may be used prior to, and as a key input to, the development of the Business Architecture view, to help identify and understand business needs, and thereby to derive the business requirements and constraints that the architecture development has to address.
Business scenarios are an extremely useful way to depict what should happen when planned and unplanned events occur. It is highly recommended that business scenarios be created for planned change, and for unplanned change. The following paragraphs describe some of the key issues that the architect might consider when constructing business scenarios. The Business Architecture view considers the functional aspects of the system; that is, what the new system is intended to do.
This can be built up from an analysis of the existing environment and of the requirements and constraints affecting the new system. The Business Architecture view considers the usability aspects of the system and its environment. It should also consider impacts on the user such as skill levels required, the need for specialized training, and migration from current practice. When considering usability, the architect should take a number of factors into account.
Note that, although security and management are thought about here, it is from a usability and functionality point of view. The technical aspects of security and management are considered in the enterprise security view (see Developing an Enterprise Security View) and the enterprise manageability view (see Developing an Enterprise Manageability View).
This view should be developed for security engineers of the system. It focuses on how the system is implemented from the perspective of security, and how security affects the system properties. It examines the system to establish what information is stored and processed, how valuable it is, what threats exist, and how they can be addressed.
Major concerns for this view are understanding how to ensure that the system is available to only those that have permission, and how to protect the system from unauthorized tampering. The subjects of the general architecture of a "security system" are components that are secured, or components that provide security services.
This section presents basic concepts required for an understanding of information system security. The essence of security is the controlled use of information. The purpose of this section is to provide a brief overview of how security protection is implemented in the components of an information system. Doctrinal or procedural mechanisms, such as physical and personnel security procedures and policy, are not discussed here in any depth.
A local subscriber environment (LSE) may be either fixed or mobile. The LSEs by definition are under the control of the using organization. In an open system distributed computing implementation, secure and non-secure LSEs will almost certainly be required to interoperate. The concept of an information domain provides the basis for discussing security protection requirements.
An information domain is defined as a set of users, their information objects, and a security policy. An information domain security policy is the statement of the criteria for membership in the information domain and the required protection of the information objects.
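As a rough illustration of this definition, the sketch below models an information domain as a set of users, their information objects, and a security policy stating the membership criteria and the required protection. The policy representation and all names are assumptions chosen for illustration.

```python
# Illustrative sketch: an information domain as users, information objects, and a
# security policy stating membership criteria and required protection of the objects.
from dataclasses import dataclass, field
from typing import Set, Dict


@dataclass
class DomainSecurityPolicy:
    membership_criteria: Set[str]        # here simply an explicit list of permitted users
    required_protection: str             # e.g. "encrypt-at-rest", "need-to-know"

    def admits(self, user: str) -> bool:
        return user in self.membership_criteria


@dataclass
class InformationDomain:
    name: str
    policy: DomainSecurityPolicy
    users: Set[str] = field(default_factory=set)
    objects: Dict[str, str] = field(default_factory=dict)   # object id -> content

    def add_user(self, user: str) -> None:
        """Membership is granted only according to the domain security policy."""
        if not self.policy.admits(user):
            raise PermissionError(f"{user} does not meet the membership criteria")
        self.users.add(user)


payroll = InformationDomain("payroll",
                            DomainSecurityPolicy({"alice", "bob"}, "need-to-know"))
payroll.add_user("alice")            # satisfies the membership criteria
print(payroll.users)                 # {'alice'}
```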
Breaking an organization's information down into domains is the first step in reducing the task of security policy development to a manageable size. The business of most organizations requires that their members operate in more than one information domain.
The diversity of business activities and the variation in perception of threats to the security of information will result in the existence of different information domains within one organization's security policy. A specific activity may use several information domains, each with its own distinct information domain security policy.
Information domains are not necessarily bounded by information systems or even networks of systems. The security mechanisms implemented in information system components may be evaluated for their ability to meet the information domain security policies. Information domains can be viewed as being strictly isolated from one another.
Information objects should be transferred between two information domains only in accordance with established rules, conditions, and procedures expressed in the security policy of each information domain. The concept of "absolute protection" is used to achieve the same level of protection in all information systems supporting a particular information domain. It draws attention to the problems created by interconnecting LSEs that provide different strengths of security protection.
This interconnection is likely because open systems may consist of an unknown number of heterogeneous LSEs. Analysis of minimum security requirements will ensure that the concept of absolute protection will be achieved for each information domain across LSEs. Generic Security Architecture View shows a generic architecture view which can be used to discuss the allocation of security services and the implementation of security mechanisms. This view identifies the architectural components within an LSE.
The end system and the relay system are viewed as requiring the same types of security protection. For this reason, a discussion of security protection in an end system generally also applies to a relay system. The security protections in an end system could occur in both the hardware and software.
Security protection of an information system is provided by mechanisms implemented in the hardware and software of the system and by the use of doctrinal mechanisms.
The mechanisms implemented in the system hardware and software are concentrated in the end system or relay system. This focus for security protection is based on the open system, distributed computing approach for information systems. This implies use of commercial common carriers and private common-user communications systems as the communications network (CN) provider between LSEs. Thus, for operation of end systems in a distributed environment, a greater degree of security protection can be assured from implementation of mechanisms in the end system or relay system.
However, communications networks should satisfy the availability element of security in order to provide appropriate security protection for the information system. This means that CNs must provide an agreed level of responsiveness, continuity of service, and resistance to accidental and intentional threats to the communications service availability. Implementing the necessary security protection in the end system occurs in three system service areas of TOGAF.
They are operating system services, network services, and system management services. Most of the implementation of security protection is expected to occur in software. The hardware is expected to protect the integrity of the end-system software. Hardware security mechanisms include protection against tampering, protection against undesired emanations, and cryptography. A "security context" is defined as a controlled process space subject to an information domain security policy.
The security context is therefore analogous to a common operating system notion of user process space. Isolation of security contexts is required, and security contexts are required for all applications. The focus is on strict isolation of information domains, management of end-system resources, and controlled sharing and transfer of information among information domains.
Where possible, security-critical functions should be isolated into relatively small modules that are related in well-defined ways. The operating system will isolate multiple security contexts from each other using hardware protection features. Untrusted software will use end-system resources only by invoking security-critical functions through the separation kernel. Most of the security-critical functions are the low-level functions of traditional operating systems.
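The sketch below illustrates this idea of untrusted software reaching end-system resources only through a small set of security-critical functions, with each security context tied to one information domain. It is a toy model; the class and method names are assumptions, not an actual separation kernel interface.

```python
# Illustrative sketch: a "separation kernel" stand-in that is the only path by which
# untrusted code can reach end-system resources. Each security context belongs to one
# information domain, and the kernel refuses cross-domain access. All names are assumed.
class SeparationKernel:
    def __init__(self):
        self._contexts = {}            # context id -> information domain
        self._resources = {}           # resource id -> owning information domain

    def create_context(self, context_id: str, domain: str) -> None:
        """Create an isolated process space governed by one domain's security policy."""
        self._contexts[context_id] = domain

    def register_resource(self, resource_id: str, domain: str) -> None:
        self._resources[resource_id] = domain

    def access(self, context_id: str, resource_id: str) -> bool:
        """Security-critical function: untrusted software may reach a resource only
        if its context and the resource belong to the same information domain."""
        return self._contexts.get(context_id) == self._resources.get(resource_id)


kernel = SeparationKernel()
kernel.create_context("app-1", domain="finance")
kernel.register_resource("ledger", domain="finance")
kernel.register_resource("designs", domain="engineering")
print(kernel.access("app-1", "ledger"))    # True: same information domain
print(kernel.access("app-1", "designs"))   # False: contexts are kept strictly isolated
```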
Two basic classes of communications are envisioned for which distributed security contexts may need to be established. These are interactive communications and staged (store and forward) communications. The concept of a "security association" forms an interactive distributed security context. A security association is defined as all the communication and security mechanisms and functions that extend the protections required by an information domain security policy within an end system to information in transfer between multiple end systems.
The security association is an extension or expansion of an OSI application layer association. An application layer association is composed of appropriate application layer functions and protocols plus all of the underlying communications functions and protocols at other layers of the OSI model. Multiple security protocols may be included in a single security association to provide for a combination of security services. For staged delivery (store and forward) communications, the necessary security attributes are wrapped together with the data being transferred.
The wrapped security attributes are intended to permit the receiving end system to establish the necessary security context for processing the transferred data. If the wrapping process cannot provide all the necessary security protection, interactive security contexts between end systems will have to be used to ensure the secure staged transfer of information.
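The following sketch illustrates the wrapping idea for staged delivery. The attribute names and the JSON encoding are assumptions chosen purely for illustration, not a defined protocol.

```python
# Illustrative sketch: "wrapping" data with the security attributes the receiving end
# system needs to re-establish a security context for staged (store and forward)
# delivery. Attribute set and encoding are assumptions for illustration only.
import json


def wrap(payload: str, domain: str, classification: str, originator: str) -> str:
    """Bundle the payload with the security attributes required by the domain policy."""
    return json.dumps({
        "security_attributes": {
            "information_domain": domain,
            "classification": classification,
            "originator": originator,
        },
        "payload": payload,
    })


def unwrap(message: str, accepted_domains: set) -> str:
    """Receiving end system: establish the security context from the wrapped
    attributes, or reject the transfer if the attributes are insufficient."""
    parsed = json.loads(message)
    attrs = parsed.get("security_attributes", {})
    if attrs.get("information_domain") not in accepted_domains:
        raise PermissionError("cannot establish a security context for this domain")
    return parsed["payload"]


msg = wrap("quarterly forecast", domain="finance",
           classification="internal", originator="alice")
print(unwrap(msg, accepted_domains={"finance"}))   # payload released into a finance context
```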
Security management is a particular instance of the general information system management functions discussed in earlier chapters. Information system security management services are concerned with the installation, maintenance, and enforcement of information domain and information system security policy rules in the information system intended to provide these security services. In particular, the security management function controls information needed by operating system services within the end system security architecture.
In addition to these core services, security management requires event handling, auditing, and recovery. Standardization of security management functions, data structures, and protocols will enable interoperation of Security Management Application Processes (SMAPs) across many platforms in support of distributed security management.
Building a software-intensive system is both expensive and time-consuming. Because of this, it is necessary to establish guidelines to help minimize the effort required and the risks involved. This is the purpose of the software engineering view, which should be developed for the software engineers who are going to develop the system. There are many lifecycle models defined for software development (waterfall, prototyping, etc.).
A consideration for the architect is how best to feed architectural decisions into the lifecycle model that is going to be used for development of the system. As a piece of software grows in size, so the complexity and inter-dependencies between different parts of the code increase.
Reliability will fall dramatically unless this complexity can be brought under control. Modularity is a concept by which a piece of software is grouped into a number of distinct and logically cohesive sub-units, presenting services to the outside world through a well-defined interface. Generally speaking, the components of a module will share access to common data, and the interface will provide controlled access to this data.
Using modularity, it becomes possible to build a software application incrementally on a reliable base of pre-tested code. A further benefit of a well-defined modular system is that the modules defined within it may be re-used in the same or on other projects, cutting development time dramatically by reducing both development and testing effort.
In recent years, the development of object-oriented programming languages has greatly increased programming language support for module development and code re-use. Such languages allow the developer to define "classes" (a unit of modularity) of objects that behave in a controlled and well-defined manner. Techniques such as inheritance (which enables parts of an existing interface to an object to be changed) enhance the potential for re-usability by allowing predefined classes to be tailored or extended when the services they offer do not quite meet the requirement of the developer.
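As a simple illustration of these ideas, the sketch below shows a module that exposes its shared data only through a well-defined interface, and a subclass that tailors the behavior through inheritance while re-using the pre-tested code. The inventory example is purely hypothetical.

```python
# Illustrative sketch: a module presents services through a well-defined interface and
# keeps its shared data behind that interface; inheritance then tailors the behaviour
# without rewriting the module. The inventory example is illustrative only.
class InventoryModule:
    """Well-defined interface; the underlying data is only reached through it."""

    def __init__(self):
        self._stock = {}                       # shared data, controlled by the interface

    def add(self, item: str, quantity: int) -> None:
        self._stock[item] = self._stock.get(item, 0) + quantity

    def available(self, item: str) -> int:
        return self._stock.get(item, 0)


class AuditedInventoryModule(InventoryModule):
    """Re-use through inheritance: the existing interface is extended, not rewritten."""

    def __init__(self):
        super().__init__()
        self.log = []

    def add(self, item: str, quantity: int) -> None:
        self.log.append((item, quantity))      # tailored behaviour
        super().add(item, quantity)            # pre-tested code is re-used as-is


inv = AuditedInventoryModule()
inv.add("widget", 3)
print(inv.available("widget"), inv.log)        # 3 [('widget', 3)]
```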
If modularity and software re-use are likely to be key objectives of new software developments, consideration must be given to whether the component parts of any proposed architecture may facilitate or prohibit the desired level of modularity in the appropriate areas. Software portability - the ability to take a piece of software written in one environment and make it run in another - is important in many projects, especially product developments.
It requires that all software and hardware aspects of a chosen Technology Architecture (not just the newly developed application) be available on the new platform. It will, therefore, be necessary to ensure that the component parts of any chosen architecture are available across all the appropriate target platforms.
Interoperability is always required between the component parts of a new architecture. It may also, however, be required between a new architecture and parts of an existing legacy system; for example, during the staggered replacement of an old system. Interoperability between the new and old architectures may, therefore, be a factor in architectural choice. This view considers two general categories of software systems.
First, there are those systems that require only a user interface to a database, requiring little or no business logic built into the software. These systems can be called data-intensive. Second, there are those systems that require users to manipulate information that might be distributed across multiple databases, and to do this manipulation according to a predefined business logic.
These systems can be called information-intensive. Data-intensive systems can be built with reasonable ease through the use of 4GL tools.
In these systems, the business logic is in the mind of the user rather than in the software. Information-intensive systems are different. Information is defined as "meaningful data"; it is not the same thing as data. Data is the tokens that are stored in databases or other data stores. Information is multiple tokens of data combined to convey a message. For example, "3" is data, but "3 widgets" is information. Typically, information reflects a model. Information-intensive systems also tend to require information from other systems and, if this path of information passing is automated, usually some mediation is required to convert the format of incoming information into a format that can be locally used.
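A small sketch may help make this distinction concrete. The field names and the mapping below are assumptions used only to illustrate how an agreed information model turns data tokens into locally usable information.

```python
# Illustrative sketch: "3" is data; an information model turns it into "3 widgets".
# The same idea drives mediation: incoming data from another system is converted,
# via an agreed model, into a format the local system can use. Field names are assumed.
LOCAL_MODEL = {"qty": "quantity", "sku": "item"}      # agreed mapping: external -> local terms


def to_information(quantity: int, item: str) -> str:
    """Combine data tokens into a meaningful statement (information)."""
    return f"{quantity} {item}s"


def mediate(external_record: dict) -> dict:
    """Convert an incoming record into the locally used format."""
    return {LOCAL_MODEL[k]: v for k, v in external_record.items() if k in LOCAL_MODEL}


incoming = {"qty": 3, "sku": "widget"}                # data as another system sends it
local = mediate(incoming)                             # {'quantity': 3, 'item': 'widget'}
print(to_information(local["quantity"], local["item"]))   # "3 widgets"
```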
Because of this need to model, exchange, and mediate information, information-intensive systems tend to be more complex than others, and require the most effort to build, integrate, and maintain. This view is concerned primarily with information-intensive systems. In addition to building systems that can manage information, though, systems should also be as flexible as possible.
This has a number of benefits. It allows the system to be used in different environments; for example, the same system should be usable with different sources of data, even if the new data store is a different configuration. Similarly, it might make sense to use the same functionality but with users who need a different user interface. So information systems should be built so that they can be reconfigured with different data stores or different user interfaces.
If a system is built to allow this, it enables the enterprise to re-use parts or components of one system in another. The word "interoperate" implies that one processing system performs an operation on behalf of or at the behest of another processing system. In practice, the request is a complete sentence containing a verb (the operation) and one or more nouns (the identities of resources), where the resources can be information, data, physical devices, etc.
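A minimal sketch of such a request follows; the shape and names are assumptions for illustration only.

```python
# Illustrative sketch: an interoperation request as a "complete sentence": a verb
# (the operation) plus one or more nouns (identities of the resources it acts on).
from dataclasses import dataclass
from typing import List


@dataclass
class Request:
    verb: str                 # the operation, e.g. "retrieve", "update", "print"
    nouns: List[str]          # identities of resources: information, data, devices, ...


def perform(request: Request) -> str:
    """One system performing an operation on behalf of another."""
    return f"{request.verb} -> {', '.join(request.nouns)}"


print(perform(Request("retrieve", ["customer-records", "billing-summary"])))
```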
Interoperability comes from shared functionality. Interoperability can only be achieved when information is passed, not when data is passed. Most information systems today get information both from their own data stores and other information systems.
In some cases the web of connectivity between information systems is quite extensive. This means that the required data is available Anytime, Anywhere, by Anyone who is Authorized, in Any way. This requires that many information systems are architecturally linked and provide information to each other. There must be some kind of physical connectivity between the systems.
This enables the transfer of bits. When the bits are assembled at the receiving system, they must be placed in the context that the receiving system needs. In other words, both the source and destination systems must agree on an information model.
The source system uses this model to convert its information into data to be passed, and the destination system uses this same model to convert the received data into information it can use. This usually requires an agreement between the architects and designers of the two systems, often documented in an Interface Control Document (ICD). The ICD defines the exact syntax and semantics that the sending system will use, so that the receiving system knows what to do when the data arrives. The biggest problem with ICDs is that they tend to be unique solutions between two systems.
If a given system must share information with n other systems, there is the potential need for n² ICDs. This extremely tight integration prohibits flexibility and the ability of a system to adapt to a changing environment. Maintaining all these ICDs is also a challenge.
Use of new technologies such as XML, once they become reliable and well documented, might eliminate the need for an ICD. It should also ease the pain of maintaining all the interfaces. Another approach is to build "mediators" between the systems.
Mediators would use metadata that is sent with the data to understand the syntax and semantics of the data and convert it into a format usable by the receiving system. However, mediators do require that well-formed metadata be sent, adding to the complexity of the interface.
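A rough sketch of the mediator idea follows; the metadata layout and field names are assumptions rather than a standard, and a real mediator would handle far richer semantics.

```python
# Illustrative sketch of a mediator: metadata travels with the data and tells the
# mediator how to convert it for the receiving system. Metadata fields are assumed.
def mediate(message: dict, target_fields: dict) -> dict:
    """Use the metadata accompanying the data to map sender fields onto the
    receiver's field names; reject the message if the metadata is not well formed."""
    metadata = message.get("metadata")
    if not metadata or "fields" not in metadata:
        raise ValueError("well-formed metadata is required for mediation")
    converted = {}
    for sender_name, meaning in metadata["fields"].items():
        if meaning in target_fields:                       # semantics the receiver understands
            converted[target_fields[meaning]] = message["data"][sender_name]
    return converted


incoming = {
    "metadata": {"fields": {"qty": "quantity-on-hand", "id": "stock-keeping-unit"}},
    "data": {"qty": 3, "id": "W-17"},
}
receiver_schema = {"quantity-on-hand": "quantity", "stock-keeping-unit": "sku"}
print(mediate(incoming, receiver_schema))     # {'quantity': 3, 'sku': 'W-17'}
```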
Typically, software architectures are either two-tier or three-tier. In a two-tier architecture, the user interface and business logic are tightly coupled while the data is kept independent. This gives the advantage of allowing the data to reside on a dedicated data server.
It also allows the data to be independently maintained. The tight coupling of the user interface and business logic ensures that they will work well together, for this problem in this domain. However, the tight coupling of the user interface and business logic dramatically increases maintainability risks while reducing flexibility and opportunities for re-use. A three-tier approach adds a tier that separates the business logic from the user interface. This in principle allows the business logic to be used with different user interfaces as well as with different data stores.
To achieve maximum flexibility, software should utilize a five-tier scheme that extends the three-tier paradigm (see The Five-Tier Organization). The scheme is intended to provide strong separation of the three major functional areas of the architecture.
Since there are client and server aspects of both the user interface and the data store, the scheme then has five tiers. The presentation tier is typically COTS-based.
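The sketch below illustrates one way the five tiers might be separated. The tier names beyond those in the text, and the example classes, are assumptions for illustration rather than the TOGAF definition of the five-tier organization.

```python
# Illustrative sketch (names beyond the text are assumptions): the five-tier split keeps
# the user interface, business logic, and data store strictly separated, with client and
# server aspects for both the user interface and the data store.
class PresentationClient:          # tier 1: typically COTS-based (e.g., a browser)
    def render(self, text: str) -> None:
        print(f"[screen] {text}")


class PresentationServer:          # tier 2: builds what the client displays
    def page_for(self, result: dict) -> str:
        return f"{result['item']}: {result['quantity']} in stock"


class BusinessLogic:               # tier 3: the rules, independent of UI and storage
    def __init__(self, data_client):
        self.data_client = data_client

    def stock_level(self, item: str) -> dict:
        return {"item": item, "quantity": self.data_client.fetch(item)}


class DataAccessClient:            # tier 4: how the logic talks to the store
    def __init__(self, store):
        self.store = store

    def fetch(self, item: str) -> int:
        return self.store.read(item)


class DataStoreServer:             # tier 5: the data itself, swappable without touching tiers 1-3
    def __init__(self, records: dict):
        self._records = records

    def read(self, item: str) -> int:
        return self._records.get(item, 0)


store = DataStoreServer({"widget": 3})
logic = BusinessLogic(DataAccessClient(store))
PresentationClient().render(PresentationServer().page_for(logic.stock_level("widget")))
```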
In Architecture, non-functional decisions are cast and separated from the functional requirements; in Design, the functional requirements are accomplished. Architecture serves as a blueprint for a system. It provides an abstraction to manage system complexity and establishes a communication and coordination mechanism among components. It defines a structured solution that meets all the technical and operational requirements while optimizing common quality attributes such as performance and security.
Further, it involves a set of significant decisions about the organization of the software system, and each of these decisions can have a considerable impact on quality, maintainability, and performance, and on the overall success of the final product.
Software design provides a design plan that describes the elements of a system, how they fit together, and how they work together to fulfill the requirements of the system. It is used to negotiate system requirements and to set expectations with customers, marketing, and management personnel. It comes after domain analysis, requirements analysis, and risk analysis, and before detailed design, coding, integration, and testing. The primary goal of the architecture is to identify the requirements that affect the structure of the application.
A well-laid architecture reduces the business risks associated with building a technical solution and builds a bridge between business and technical requirements. Software architecture is still an emerging discipline within software engineering; it suffers from a lack of analysis methods to predict whether an architecture will result in an implementation that meets the requirements, a lack of understanding of the role of the software architect, and poor communication among stakeholders. A software architect provides a solution that the technical team can create and design for the entire application, is expected to be an expert in software design (including diverse methods and approaches such as object-oriented design and event-driven design), and leads the development team and coordinates the development effort to preserve the integrity of the design.