GS BUSA 415 CH 8 Architecture Design


Three-tiered architecture

Expands on the two-tier system with the addition of an application server that contains the software applications and the business rules. In this case, the software on the client computer is responsible for the presentation logic, an application server(s) is responsible for the application logic, and a separate database server(s) is responsible for the data access logic and data storage. The application logic may consist of one or more separate modules running on a workstation or application server. Finally, a relational DBMS running on a database server contains the data access logic and data storage.

The encryption and authentication requirements state

what encryption and authentication requirements are needed for what data. For example, will sensitive data such as customer credit card numbers be stored in the database in encrypted form, or will encryption be used to take orders over the Internet from the company's website? Will users be required to use a digital certificate in addition to a standard password?

All software systems can be divided into four basic functions. These four functions (data storage, data access logic, application logic, and presentation logic) are the basic building blocks of any information system.

1. Data Storage Most information systems require data to be stored and retrieved. These are the data entities documented in Entity Relationship Diagrams (ERDs). 2. Data Access Logic The processing required to access data, often meaning database queries in Structured Query Language (SQL). 3. Application Logic The logic documented in the DFDs, use cases, and functional requirements. 4. Presentation Logic The display of information to the user and the acceptance of the user's commands (the user interface).
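The separation of these four functions can be sketched in a few lines of Python; the in-memory SQLite database and the order data here are purely illustrative:

```python
import sqlite3

# Data storage: an in-memory SQLite database stands in for the DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 19.99), (2, 5.50)])

# Data access logic: the SQL queries that read the stored data.
def fetch_order_amounts():
    return [row[0] for row in conn.execute("SELECT amount FROM orders")]

# Application logic: the business rule (here, totaling the orders).
def order_total():
    return sum(fetch_order_amounts())

# Presentation logic: formatting the result for the user.
def display_total():
    return f"Total due: ${order_total():.2f}"

print(display_total())  # Total due: $25.49
```

In a three-tiered architecture each of these layers would run on a different machine; in a two-tiered architecture the application and presentation layers would share the client.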

2 of 4 : Creating an Architecture Design : Performance requirements Focus on performance issues such as response time, capacity, and reliability. As analysts define the performance requirements for the system, the testability of those requirements is a key issue. ○ Each requirement must be measurable so that a benchmark comparison can be made. Only in that way can the achievement of a performance requirement be verified. Three key performance requirement areas 1. Speed requirements are exactly what they say: how fast the system must operate. First and foremost, this is the response time of the system: How long does it take the system to respond to a user request? The second aspect of speed requirements is how long it takes transactions in one part of the system to be reflected in other parts. For example, how soon after an order is placed will the items it contains be shown as no longer available for sale to someone else?

2. Capacity requirements attempt to predict how many users the system will have to support, both in total and simultaneously. Capacity requirements are important in understanding the size of the databases, the processing power needed, and so on. The most important requirement is usually the peak number of simultaneous users, because this has a direct impact on the processing power of the computer(s) needed to support the system. 3. Availability and reliability requirements focus on the extent to which users can assume that the system will be available for them to use. While some systems are intended to be used just during the 40-hour work week, other systems are designed to be used by people around the world. For such systems, project team members need to consider how the application can be operated, supported, and maintained 24/7 (i.e., 24 hours a day, 7 days a week). A system that requires high reliability (e.g., a medical device or an e-commerce order processing system during the Christmas shopping season) needs far greater planning and testing than one that does not have such high reliability needs (e.g., personnel system, Web catalog).
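The earlier point that each performance requirement must be measurable can be illustrated with a small timing harness; the 2-second limit and the simulated request below are hypothetical values, not a standard:

```python
import time

RESPONSE_TIME_LIMIT_S = 2.0  # hypothetical benchmark from the requirements

def measure_response_time(operation):
    """Time a single request so the requirement can be verified."""
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start

def sample_request():
    # Stand-in for a real system call (e.g., a database query).
    time.sleep(0.01)

elapsed = measure_response_time(sample_request)
print(f"Response time {elapsed:.3f}s, requirement met: {elapsed <= RESPONSE_TIME_LIMIT_S}")
```

Because the requirement is stated as a number, the benchmark comparison is a simple pass/fail check rather than a judgment call.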

4 of 4 : Creating an Architecture Design : Cultural and political requirements Cultural and political requirements are specific to the countries in which the system will be used. In today's global business environment, organizations are expanding their systems to reach users around the world. There are four areas 1. Multilingual Requirements The first and most obvious difference between applications used in one region and those designed for global use is language. Global applications often have multilingual requirements, which means that they have to support users who speak different languages and write with non-English letters (e.g., those with accents, Cyrillic, Japanese). Some systems are designed to handle multiple languages on the fly so that users in different countries can use different languages concurrently; that is, the same system supports several different languages simultaneously (a concurrent multilingual system). Other systems contain separate parts that are written in each language and must be reinstalled before a specific language can be used; that is, each language is provided by a different version of the system so that any one installation will use only one language (i.e., a discrete multilingual system).

2. Customization Requirements For global applications, the project team will need to give some thought to customization requirements: how much of the application will be controlled by a central group and how much of the application will be managed locally? 3. Unstated Norms Many countries have unstated norms that are not shared internationally. It is important for the application designer to make these assumptions explicit, because they can lead to confusion otherwise. In the United States, the unstated norm for entering a date is the date format MM/DD/YYYY; however, in Canada and most European countries, the unstated norm is DD/MM/YYYY. 4. Legal Requirements Legal requirements are imposed by laws and government regulations. System developers sometimes forget to think about legal regulations, but unfortunately, ignorance of the law is no defense.
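The date-format norm in point 3 is easy to demonstrate; the sample date below is arbitrary:

```python
from datetime import date

d = date(2026, 3, 4)  # March 4, 2026

# The same date rendered under two regional norms:
us_format = d.strftime("%m/%d/%Y")  # United States: MM/DD/YYYY
eu_format = d.strftime("%d/%m/%Y")  # Canada / most of Europe: DD/MM/YYYY

print(us_format)  # 03/04/2026
print(eu_format)  # 04/03/2026

# Read without context, "03/04/2026" is ambiguous, which is one reason
# global applications should make the expected format explicit or use an
# unambiguous form such as ISO 8601:
print(d.isoformat())  # 2026-03-04
```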

Proponents of cloud computing point to a number of advantages of the cloud computing model. 1. Resources allocated can be increased or decreased based upon demand. This capability, termed elasticity, makes the cloud scalable—the cloud can scale up for periods of peak demand and scale down for times of less demand. 2. Cloud customers can obtain cloud resources in a straightforward fashion. Arrangements are made with the cloud service provider for a certain amount of computing, storage, software, process, or other resources. When these resources are no longer required, they can be released.

3. Cloud services typically have standardized application program interfaces (APIs). This means that the services have standardized the way that programs or data sources communicate with each other. This capability lets the customer more easily create linkages between cloud services. 4. The cloud computing model enables customers to be billed for resources as they are used. Usage of the cloud is measured and customers pay only for resources used—much like your use of electricity in your apartment. This feature makes cloud computing very attractive from a financial perspective. Organizations should be prepared to carefully structure their cloud computing arrangements and include redundancy in their applications so that the negative consequences of a catastrophic failure are minimized.
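The metered, pay-per-use billing model in point 4 can be sketched as a simple calculation; the rates below are invented for illustration and are not any provider's actual pricing:

```python
# Hypothetical metered rates (real providers publish their own pricing).
RATE_PER_CPU_HOUR = 0.05   # dollars per CPU-hour consumed
RATE_PER_GB_MONTH = 0.02   # dollars per GB stored per month

def monthly_bill(cpu_hours, storage_gb):
    """Pay-per-use billing: charges accrue only for resources consumed."""
    return cpu_hours * RATE_PER_CPU_HOUR + storage_gb * RATE_PER_GB_MONTH

# A peak month (scaled up) costs more than a quiet month (scaled down),
# with no fixed hardware investment in either case.
print(monthly_bill(2000, 500))
print(monthly_bill(200, 500))
```

This is the "like your use of electricity" analogy in code: the bill tracks measured usage, so elasticity translates directly into cost savings during low-demand periods.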

1 of 4 : Creating an Architecture Design : Operational Requirements Operational requirements specify the operating environment(s) in which the system must perform and how those may change over time. This usually refers to operating systems, system software, and information systems with which the system must interact, but will on occasion also include the physical environment if the environment is important to the application. There are four key operational requirement areas. 1. Technical environment requirements specify the type of hardware and software on which the system will work. These requirements usually focus on the operating system software (e.g., Windows, Linux), database system software (e.g., Oracle), and other system software (e.g., Internet Explorer). 2. System integration requirements are those that require the system to operate with other information systems, either inside or outside the company. These typically specify interfaces through which data will be exchanged with other systems.

3. Portability requirements define how the technical operating environments may evolve over time and how the system must respond (e.g., the system may currently run on Windows 7, but in the future may have to run on Windows 10 and Linux). Portability requirements also refer to potential changes in business requirements that will drive technical-environment changes. 4. Maintainability requirements specify the business requirement changes that can be anticipated. Not all changes are predictable, but some are. The maintainability requirement attempts to anticipate future requirements so that the systems designed today will be easy to maintain if and when those future requirements appear. Maintainability requirements may also define the update cycle for the system, such as the frequency with which new versions will be released.

mission critical system

An information system that is literally critical to the survival of the organization. It is an application that cannot be permitted to fail, and if it does fail, the operations staff drops everything else to fix it. Mission critical applications are usually clearly identified so that their importance is not overlooked.

Creating an Architecture Design The architecture design specifies the overall architecture and the placement of software and hardware that will be used.

Creating an architecture design begins with the nonfunctional requirements. 1. Refine the nonfunctional requirements into more detailed requirements that are then employed to help select the architecture to be used and the software components to be placed on each device. 2. Then the nonfunctional requirements and the architecture design are used to develop the hardware and software specification. There are four primary types of nonfunctional requirements that can be important in designing the architecture: 1. operational requirements, 2. performance requirements, 3. security requirements, and 4. cultural and political requirements.

Designing the Architecture In many cases, the technical environment requirements as driven by the business requirements may simply define the application architecture.

In the event that the technical environment requirements do not require the choice of a specific architecture, then the other nonfunctional requirements become important. Figure 8‑9 summarizes the relationship between requirements and recommended architectures.

Comparing Architecture Options

Most systems are built to use the existing infrastructure in the organization, so often the current infrastructure restricts the choice of architecture. ○ Many organizations now have a variety of infrastructures in use, however, or are actively looking for pilot projects to test new architectures and infrastructures, enabling the project team to select an architecture on the basis of other important factors. ○ Project teams often underestimate the difficulty associated with creating secure, efficient client-server applications.

Native Applications are written to run on a specific device with a specific operating system. Applications written in a device's native code (e.g., Objective-C for Apple devices; Java for Android devices) provide the richest user experience and can take full advantage of the device's resources.

Native applications must be re-built for each native operating system, however, and developers with skills in multiple languages are scarce. In addition, the apps must be updated as new device models and operating system versions are released.

Architecture Design

Plans for how the information system components will be distributed across multiple computers and what hardware, operating system software, and application software will be used on each computer

Mobile Application Architecture Similar to our previous discussion, the system architect must decide how much of the presentation logic, business logic, and data access logic will reside on the mobile device, and how much will reside on server devices.

Rich client - involves processing on the mobile device using its resources. Presentation logic, business logic, and data access logic reside on the client side. Only occasionally connected to the server. Thin Web-based client - business and data access logic on the server side; always connected to the server. Rich Internet application - browser-based; uses some technologies on the client device to provide a rich user interface (e.g., Flash). If the application requires a rich user interface (UI), needs only limited access to local device resources, and must be portable to other platforms, the rich Internet application approach is a good fit.

3 of 4 : Creating an Architecture Design : Security requirements Security is the ability to protect the information system from disruption and data loss, whether caused by an intentional act (e.g., a hacker or a terrorist attack) or a random event (e.g., disk failure, tornado). Security is primarily the responsibility of the operations group—the staff responsible for installing and operating security controls such as firewalls, intrusion detection systems, and routine backup and recovery operations. Nonetheless, developers of new systems must ensure that the system's security requirements produce reasonable precautions to prevent problems; system developers are responsible for ensuring security within the information systems themselves.

System developers must know the security standards that are applicable to their organizations. Some industries are subject to various restrictions imposed by law. • The Sarbanes-Oxley Act is well known for the responsibilities it places on publicly traded companies to protect their financial systems from fraud and abuse. • The Health Insurance Portability and Accountability Act (HIPAA) applies to health-care providers, health insurers, and health information data processors. • The Gramm-Leach-Bliley (GLB) Act applies to financial institutions. There are also voluntary security standards that companies may choose to benchmark against, such as • the International Organization for Standardization (ISO): ISO 17799 is a generally accepted standard for information security. • the Payment Card Industry Data Security Standards (PCI DSS), which were established by the major credit card companies to ensure the privacy of stored customer information.

Hardware and Software Specification Document The design phase is also the time to begin selecting and acquiring the hardware and software that will be needed for the future system. The specification typically covers four categories: • Operating System ○ Special Software • Hardware • Network 1. Operating System Here, you should consider any additional costs such as technical training, maintenance, extended warranties, and licensing agreements (e.g., a site license for a software package). 2. Hardware In general, the list can include such things as database servers, network servers, peripheral devices (e.g., printers, scanners), backup devices, storage components, and any other hardware component needed to support an application. At this time, you also should note the quantity of each item that will be needed. ○ You need to describe, in as much detail as possible, the minimum requirements for each piece of hardware. • Consider the hardware standards within the organization or those recommended by vendors. • Talk with experienced system developers or other companies with similar systems. • Finally, think about the factors that affect hardware performance, such as the response-time expectations of the users, data volumes, software memory requirements, the number of users accessing the system, the number of external connections, and growth projections. 3. Network

The hardware and software specification is a document that describes what hardware and software are needed to support the application. There are several steps involved in creating the document. Figure 8‑10 shows a sample hardware and software specification. After preparing the hardware and software specification, the project team works with the purchasing department to acquire the hardware and software. The project team prepares a request for proposal (RFP) based on legal and organizational policies provided by the purchasing department, which then issues the RFP. ○ The project team then selects the most desirable vendor for the hardware and software on the basis of the proposals received, perhaps using a weighted alternative matrix.
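A weighted alternative matrix of the kind mentioned above can be computed in a few lines; the criteria, weights, and vendor scores here are hypothetical:

```python
# Hypothetical evaluation criteria with weights summing to 1.0,
# and vendor scores on a 1-5 scale.
weights = {"cost": 0.4, "support": 0.3, "performance": 0.3}

vendors = {
    "Vendor A": {"cost": 4, "support": 3, "performance": 5},
    "Vendor B": {"cost": 5, "support": 2, "performance": 3},
}

def weighted_score(scores):
    """Sum of each criterion's score multiplied by its weight."""
    return sum(weights[c] * scores[c] for c in weights)

best = max(vendors, key=lambda v: weighted_score(vendors[v]))
for name, scores in vendors.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("Recommended:", best)
```

The weights force the team to state, before proposals arrive, how much each criterion matters, which makes the final vendor choice defensible.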

Server-Based Architecture In this section we discuss the server-based architecture, a less common, but still important architecture choice. This very simple architecture often works very well. • Application software is developed and stored on the server, and all data are on the same computer. • There is one point of control because all messages flow through the one central server. • Software development and software administration are simplified because a single computer hosts the entire system (operating system and application software). • Server-based system architectures are common today in situations where systems process very high transaction volumes and strong security is required.

The very first computing architectures were server-based, with the server (usually, a central mainframe computer) performing all four application functions. The clients (in those days, "dumb" terminals) enabled users to send and receive messages to and from the server computer. The clients merely captured keystrokes and sent them to the server for processing, and accepted instructions from the server on what to display (Figure 8‑4). ○ The server-based architecture was the first architecture used in information systems, but did not remain the only option as hardware and software evolved.

Mobile Web apps Platform independent and can be deployed on any device equipped with the Web browser. Mobile Web apps are built with Web technology such as HTML5.

These applications cannot use the hardware and software on the device, require an Internet connection to work, and will provide the most generic user experience.

Anyone can post a public key on the Internet, so there is no way of knowing for sure who they actually are.

This is where the Internet's public key infrastructure (PKI) becomes important. The PKI is a set of hardware, software, organizations, and policies designed to make public key encryption work on the Internet. PKI begins with a certificate authority (CA), which is a trusted organization that can vouch for the authenticity of the person or organization using authentication (e.g., VeriSign).

Virtualization • Server virtualization • Storage virtualization • Network virtualization

This term, in the computing domain, refers to the creation of a virtual device or resource, such as a server or storage device. You may be familiar with this concept if you have partitioned your computer's hard drive into more than one separate hard drive. While you only have one physical hard drive in your system, you treat each partitioned, "virtual" drive as if it is a distinct physical hard drive.

Blockchain

Used to secure the transactions taking place between users in the same network. Its primary goal is to certify transactions between parties. With blockchain, a public register stores transactions between two users belonging to the same network in a secure, permanent, and verifiable way.

Two-tiered architecture

Uses only two sets of computers, one set of clients and one set of servers. The server is responsible for the data, and the client is responsible for the application and presentation.

Cross-platform Frameworks exist that enable developers to create a mobile app in Web-based technologies like JavaScript or HTML and use the framework to adapt the application for multiple devices.

Usually the app must be "tweaked" for each device on which it will be deployed. Mobile applications developed with this approach will not provide the same rich user experience as the native apps, so it is best to use this approach with informational applications that do not require heavy use of device functions.

Public key encryption also permits authentication (or digital signatures).

When one user sends a message to another, it is difficult to legally prove who actually sent the message. Legal proof is important in many communications, such as bank transfers and buy/sell orders in currency and stock trading, which normally require legal signatures. ○ Public key encryption algorithms are invertible, meaning that text encrypted with either key can be decrypted by the other. Normally, we encrypt with the public key and decrypt with the private key. However, it is possible to do the inverse: encrypt with the private key and decrypt with the public key.
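The invertibility described above can be demonstrated with a toy RSA example; the tiny primes are used only to keep the arithmetic visible and make the scheme completely insecure in practice:

```python
# Toy RSA keypair: n = 61 * 53, public exponent e, private exponent d.
# NOT secure; real systems use vetted libraries and 2048-bit or larger keys.
n, e, d = 3233, 17, 2753

def transform(m, key):
    """RSA modular exponentiation; works with either key."""
    return pow(m, key, n)

msg = 65  # a message small enough to fit under n

# Confidentiality: encrypt with the PUBLIC key, decrypt with the PRIVATE key.
assert transform(transform(msg, e), d) == msg

# Authentication (digital signature): encrypt with the PRIVATE key.
# Anyone holding the public key can recover the message, which proves
# only the private-key holder could have produced the signature.
signature = transform(msg, d)
assert transform(signature, e) == msg

print("both round trips verified")
```

The two assertions are the two uses of the same algorithm: the first protects the message's secrecy, the second proves who sent it.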

Storage Area Network (SAN)

A high-speed network with the sole purpose of providing storage to other attached servers. A SAN uses storage virtualization to create a high-speed subnetwork of shared storage devices. In this environment, tasks such as backup, archiving, and recovery are easier and faster.

Client-Server Architectures Most organizations today are utilizing client-server architectures, which attempt to

balance the processing between client devices and one or more server devices. The client is responsible for the presentation logic, whereas the server is responsible for the data access logic and data storage. The application logic may reside on the client, reside on the server, or be split between both (Figure 8‑1). If the client shown in Figure 8‑1 contains all or most of the application logic, it is called a thick or fat client. Currently, thin clients, containing just a small portion of the application logic, are popular because of lower overhead and easier maintenance.

An important component of the design phase is the architecture design, which

describes the system's hardware, software, and network environment. The architecture design flows primarily from the nonfunctional requirements, such as: • operational, • performance, • security, • cultural, and • political requirements. The deliverables from architecture design include: • the architecture design and • the hardware and software specification.

n-tiered architecture

Distributes the work of the application (the middle tier) among multiple layers of more specialized server computers. This type of architecture is common in today's Web-based e-commerce systems (Figure 8‑3). The browser software on client computers makes HTTP requests to view pages from the Web server(s), and the Web server(s) enable the user to view merchandise for sale by responding with HTML documents.

The primary advantage of an n-tiered client-server architecture compared with a two-tiered architecture (or a three-tiered with a two-tiered) is that it separates out the processing that occurs to better balance the load on the different servers; it is more scalable. There are two primary disadvantages to an n-tiered architecture compared with a two-tiered architecture (or a three-tiered with a two-tiered). • First, the configuration puts a greater load on the network. • Second, it is much more difficult to program and test software in n-tiered architectures than in two-tiered architectures, because more devices have to communicate properly to complete a user's transaction.

Architectural Components

The hardware and software of a system. The major software components of the system being developed have to be identified and then allocated to the various hardware components on which the system will operate.

Elements of an Architecture Design The objective of architecture design is to determine

how the software components of the information system will be assigned to the hardware devices of the system.

Security is an ever-increasing problem in today's Internet-enabled world. Historically, the greatest security threat has come from

inside the organization itself. Ever since the early 1980s when the FBI first began keeping computer crime statistics and security firms began conducting surveys of computer crime, organizational employees have perpetrated the majority of computer crimes.

Storage virtualization

involves combining multiple network storage devices into what appears to be a single storage unit

Server virtualization

Involves partitioning a physical server into smaller virtual servers. Software is used to divide the physical server into multiple virtual environments, called virtual or private servers. Today, a physical server device can be used to provide many virtual servers that are independent of each other, but co-reside on the same physical server. Each virtual server runs an operating system and can be rebooted independently of the other virtual servers. Less hardware is required to provide a set of virtual servers as compared to equivalent physical servers, so costs are reduced. This arrangement can also optimize the utilization of the physical server, saving on operational costs. This capability overcomes the primary limitation of the older style server-based architectures that were based on single, large, expensive, monolithic computers.

Zero client, or ultra-thin client

A server-based computing model that is often used today in a virtual desktop infrastructure (VDI). A typical zero client device is a small box that connects a keyboard, mouse, monitor, and Ethernet connection to a remote server. The server hosts everything: the client's operating system and all software applications. The server can be accessed wirelessly or with cable.

Zero client computing has a number of benefits. • Power usage can be significantly reduced compared to fat client configurations. • The devices used are much less expensive than PCs or even thin client devices. • Since there is no software at the client device, there is no vulnerability to malware. • Administration is easy, and multiple virtual PCs can be run on server-class hardware in VDI environments, significantly reducing the number of physical PCs that must be acquired and maintained. • In addition, the server-based zero-client model limits the non-business use of the client computer (e.g., no Facebook; no Farmville, etc.).

Developing security requirements usually starts with some assessment of the value of the system and its data. 1. System Value Estimates This helps pinpoint extremely important systems so that the operations staff are aware of the risks. The most important computer asset in any organization is not the equipment; it is the organization's data. In some cases, the information system itself has value that far exceeds the cost of the equipment as well. For example, for an Internet bank that has no brick-and-mortar branches, the website is a mission critical system. 2. Access Control Requirements Security within systems usually focuses on specifying who can access what data, identifying the need for encryption and authentication, and ensuring that the application prevents the spread of viruses (Figure 8‑7). Access control requirements state who can access what data and what type of access is permitted—whether the individual can create, read, update, and/or delete the data. The requirements reduce the chance that an authorized user of the system can perform unauthorized actions. 3. Encryption and Authentication Requirements One of the best ways to prevent unauthorized access to data is encryption, which is a means of disguising information by the use of mathematical algorithms (or formulas). Encryption can be used to protect data stored in databases or data that are in transit over a network from a database to a computer. There are two fundamentally different types of encryption. • A symmetric encryption algorithm [such as the Data Encryption Standard (DES) or the Advanced Encryption Standard (AES)] is one in which the key used to encrypt a message is the same as the one used to decrypt it. • In an asymmetric encryption algorithm (such as public key encryption), the key used to encrypt data (called the public key) is different from the one used to decrypt it (called the private key). Even if everyone knows the public key, once the data are encrypted, they cannot be decrypted without the private key. Public key encryption greatly reduces the key management problem. 4. Virus Control Requirements The single most common security problem comes from viruses. Recent studies have shown that over 50% of organizations suffer a virus infection each year. Viruses cause unwanted events—some harmless (such as nuisance messages), some serious (such as the destruction of data). Any time a system permits data to be imported or uploaded from a user's computer, there is the potential for a virus infection.
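The symmetric case (the same key both encrypts and decrypts) can be sketched with a toy keystream cipher; this illustrates the concept only and is not a substitute for a vetted algorithm such as AES:

```python
import hashlib

def keystream_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data against a key-derived stream.
    Applying it twice with the same key restores the original bytes."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        # Derive keystream blocks from the key and a counter.
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

key = b"shared secret"                 # both parties must hold this key
plaintext = b"card number 4111-xxxx"   # e.g., data at rest in a database

ciphertext = keystream_cipher(plaintext, key)
assert ciphertext != plaintext
# Symmetric: the SAME key decrypts.
assert keystream_cipher(ciphertext, key) == plaintext
print("symmetric round trip verified")
```

The key management problem follows directly: every pair of communicating parties needs a shared secret, which is exactly what asymmetric (public key) encryption avoids.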

The fundamental problem with early server-based systems was that

the server processed all the work in the system. As the demands for more and more applications and the number of users grew, server computers became overloaded and unable to quickly process all the users' demands. In the early days, upgrading to a larger server computer (usually a mainframe) required a substantial financial commitment. Increased capacity came only in large, expensive chunks.

The "cloud" in cloud computing can be defined as

the set of hardware, networks, storage, services, and interfaces that combine to deliver aspects of computing as a service. Cloud services include the delivery of software, infrastructure, and storage over the Internet (either as separate components or a complete platform) based on user demand.

Designing the system architecture can be quite difficult; therefore, many organizations use

the skills of experienced, expert system architects (consultants or employees) who specialize in the task. These specialists ensure that the new system is developed as a unified, coherent software system that satisfies the user and functional requirements and conforms to the organization's architectural standards and goals.

Client-server architectures also have some critical limitations, the most important of which is

their complexity. All applications in client-server computing have two parts—the software on the client side and the software on the server side. Writing this software is more complicated than writing the traditional all-in-one software used in server-based architectures (discussed in a later section). Updating the overall system with a new version of the software is more complicated, too. With client-server architectures, you must update all clients and all servers and you must ensure that the updates are applied on all devices.

Cloud Computing

wherein everything, from computing power to computing infrastructure, applications, business processes to personal collaboration—can be delivered as a service wherever and whenever needed

Objectives • Describe the fundamental components of an information system. • Describe client-server, server-based, and mobile application architectures.

• Describe how cloud computing can be incorporated as a system architecture component. • Explain how operational, performance, security, cultural, and political requirements affect the architecture design. • Create a hardware and software specification.

Client-server architectures have four important benefits. • First and foremost, they are scalable. That means it is easy to increase or decrease the storage and processing capabilities of the servers. The cost to upgrade is gradual, and you can upgrade in small increments. • Second, client-server architectures can support many different types of clients and servers. It is possible to connect computers that use different operating systems so that you are not locked into one vendor. ○ Middleware is a type of system software designed to translate between different vendors' software. Middleware is installed on both the client computer and the server computer. The client software communicates with the middleware, which reformats the message into a standard language; the middleware on the server then translates that message for the server software.

• Third, for thin client-server architectures that use Internet standards, it is simple to clearly separate the presentation logic, the application logic, and the data access logic and design each to be somewhat independent. ○ For example, the presentation logic can be designed in HTML or XML to specify how the page will appear on the screen. • Finally, if a server fails in a client-server architecture, only the applications requiring that server will fail. The failed server can be swapped out and replaced and the applications can then be restored.

The three primary hardware components of a system are

• Client computers The input-output devices employed by the user; they include desktop or laptop computers, handheld devices, smartphones, tablet devices, special-purpose terminals, and so on. • Servers Larger multi-user computers used to store software and data that can be accessed by anyone who has permission. • The network that connects them The network can vary in speed from slow cell phone or modem connections that must be dialed, to medium-speed always-on frame relay networks, to fast always-on broadband connections such as cable modem, DSL, or T1 circuits, to high-speed always-on Ethernet, T3, or ATM circuits.

Client-Server Tiers There are many ways in which the application logic can be partitioned between the client and the server.

• two-tiered architecture • three-tiered architecture • n-tier architecture

Cloud computing can be implemented in three ways: • private cloud, • public cloud, and • hybrid cloud.

○ Public cloud Services are provided "as a service" over the Internet with little or no control over the underlying technology infrastructure. ○ Private cloud Offers activities and functions "as a service," but deployed over a company intranet or hosted data center. ○ Hybrid cloud Combines the power of both public and private clouds. In this scenario, activities and tasks are allocated to private or public clouds as required.
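A hybrid-cloud allocation policy of the kind described can be sketched as a simple placement rule; the criteria used here (sensitivity and burstiness) are illustrative assumptions, not a standard policy:

```python
# Sketch of a hybrid-cloud placement rule: sensitive workloads stay on
# the private cloud; bursty public-facing workloads go to the public
# cloud, where elasticity handles demand spikes.
def place_workload(name, sensitive, bursty):
    if sensitive:
        return "private cloud"
    if bursty:
        return "public cloud"
    return "private cloud"  # the default placement is a policy choice

print(place_workload("payroll", sensitive=True, bursty=False))
print(place_workload("web storefront", sensitive=False, bursty=True))
```

In practice such rules are set by organizational policy and regulation (e.g., data residency requirements), not hard-coded per application.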

