BIS 324

Logical Design of Relational Database Systems

(1) The conversion of E-R diagrams into relational tables. (2) The data normalization technique. (3) The use of the data normalization technique to test the tables resulting from the E-R diagram conversions.

Data Administration

A planning and analysis function that is responsible for: -setting data policy and standards -promoting the company's data as a competitive resource -accounting for the use of data -providing liaison support to systems analysts during application development

Create/Drop Table in SQL

CREATE TABLE SALESPERSON (SPNUM CHAR(3) PRIMARY KEY, SPNAME CHAR(12), COMMPERCT DECIMAL(3,0), YEARHIRE CHAR(4), OFFNUM CHAR(3)); DROP TABLE SALESPERSON;

Create/Drop View in SQL

CREATE VIEW EMPLOYEE AS SELECT SPNUM, SPNAME, YEARHIRE FROM SALESPERSON; DROP VIEW EMPLOYEE;

Tracks

Concentric circles on which data is stored, serially by bit. Numbered track 0, track 1, track 2, and so on.

Data Dictionary: Entities and Attributes

Data-Related Entities: Databases, Tables, Attributes, Web Pages. Software-Related Entities: Application Programs, Database Management Systems, Jobs. Hardware-Related Entities: Computers, Disks, Local-Area Networks. Outputs: Reports, Queries. People.

Functional Dependence

Salesperson Number --> Salesperson Name. Salesperson Number is the determinant: the value of Salesperson Number determines the value of Salesperson Name. Salesperson Name is functionally dependent on Salesperson Number.

Managing Externally Acquired Databases

Some databases are not designed by a company's own personnel but are acquired as part of purchased software packages. For example, Enterprise Resource Planning (ERP) software.

Converting Entities in Ternary Relationships

The primary key of the SALE table is the combination of the unique identifiers of the three entities involved, plus the Date attribute.
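A minimal SQL sketch of such a SALE table, assuming SALESPERSON, CUSTOMER, and PRODUCT entities keyed by SPNUM, CUSTNUM, and PRODNUM (the customer and product column names and sizes are illustrative, not taken from the course tables, and QUANTITY stands in for the intersection data):
CREATE TABLE SALE (SPNUM CHAR(3), CUSTNUM CHAR(4), PRODNUM CHAR(5), SALEDATE DATE, QUANTITY NUMBER(5), PRIMARY KEY (SPNUM, CUSTNUM, PRODNUM, SALEDATE));
Without the Date attribute in the key, a given salesperson could record only one sale of a given product to a given customer, ever.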

Documentation and Publicity

The data management function is responsible for documenting the data environment. Documentation includes a description of the data and the databases, plus the programs, reports, and the people who have access to these items. The data management group should also perform a publicity function, informing potential users of what data already exists in the database; this may help employees discover how to automate more of their work.

Data Dictionary Operations

The database administration group is responsible for the operational aspects, as opposed to the planning aspects, of the data dictionary and any other metadata tools.

Cylinders

A collection of tracks, one from each recording surface, one directly above the other. The number of cylinders in a disk equals the number of tracks on any one of its recording surfaces. For example, the collection of each surface's track 76, one above the other, seems to take the shape of a cylinder; this collection of tracks is called cylinder 76. Once a cylinder has been established, it is also necessary to number the tracks within it (cylinder 76's tracks).

Relational DBMS Catalogs

A highly active but limited-scope data dictionary that is very closely tied in to the operations of the relational DBMS. It is composed of relational tables and may be queried with standard SQL commands. Database entity data stored includes: -Databases -Tables -Attributes -Views -Indexes -Users -Disks It does not include such entities as reports and nonrelational files. The main purpose of the relational catalog is to accurately support the relational query optimizer. It also provides a roadmap through the database data for anyone who wants to query the data or explore new ways to use it.
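As a sketch of such standard SQL queries against Oracle's catalog (Oracle being the DBMS used in this course), the first statement lists a user's tables and the second lists the attributes of the SALESPERSON table:
SELECT TABLE_NAME FROM USER_TABLES;
SELECT COLUMN_NAME, DATA_TYPE FROM USER_TAB_COLUMNS WHERE TABLE_NAME = 'SALESPERSON';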

Data Repositories

A large-scale data dictionary that includes entity types generated and needed by the latest IS technologies. In the CASE environment, the data repository holds the same types of data that traditional data dictionaries hold, as well as CASE-specific data such as reusable code modules.

Data Normalization Process

A methodology for organizing attributes into tables so that redundancy among the nonkey attributes is eliminated. The output of the data normalization process is a properly structured relational database.

Physical Design Categories and Techniques That DO NOT Change the Logical Design

Adding External Features: Adding Indexes, Adding Views. Reorganizing Stored Data: Clustering Files. Splitting a Table into Multiple Tables: Horizontal Partitioning, Vertical Partitioning, Splitting-Off Large Text Attributes.

Achieving Direct Access

Two tools achieve direct access: the index and the hashing method (a way of storing and retrieving records). If we know the value of a field of a record that we want to retrieve, the index or hashing method will pinpoint its location in the file and instruct the hardware mechanisms of the disk device where to find it.

Operational Mgmt of Data

At the operational level, for the day-to-day management of the company's production databases, an independent department must be responsible. Data should be managed by an independent group whose loyalty is to the overall company and not to any individual function.

Computer Storage-Character

At the smallest scale, computers store 1 or 0 (on or off, or positive or negative). Bit: the smallest unit of storage, either 0 or 1. Byte: a grouping of 8 bits, for example 01001101; one byte can store one character (like 'A' or '2'). ASCII (American Standard Code for Information Interchange) encodes characters as 8-bit integers (see http://www.ascii-code.com/). Example: 'A' is 01000001; '2' is 00110010.

Data Planning

Begins with determining what data will be needed for future company business efforts and the applications that will support them. Related to strategic data planning is the matter of what hardware and software will be needed to support the company's information systems operations in the future.

Datatypes used in BIS 324

CHAR, VARCHAR2, NUMBER, DATE

Character Data

CHAR: fixed-length character (1 to 2000 bytes). If the data is shorter, Oracle pads with spaces to take up all of the bytes specified; if the data is longer only because of trailing spaces, the trailing spaces are trimmed; if the value is too large, Oracle returns an error. Faster, with less overhead. Specified as char(x), where x is the number of bytes, e.g., char(2) or char(100). VARCHAR2: variable-length character (1 to 4000 bytes). Includes a length and/or end-of-data indicator in storage. Specified as varchar2(x), where x is the number of bytes, e.g., varchar2(4) or varchar2(300). NVARCHAR2 and NCHAR: national character strings for Unicode (multilingual) data - extended character sets using 16-bit characters instead of 8 bits.
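A hedged example pulling the four course data types together in one table definition (the table and column names are invented for illustration):
CREATE TABLE TYPE_DEMO (
ID CHAR(4) PRIMARY KEY, -- CHAR: always 4 bytes, padded with spaces
DESCRIPTION VARCHAR2(40), -- VARCHAR2: only the bytes actually used
AMOUNT NUMBER(8,2), -- NUMBER: up to 8 digits, 2 after the decimal point
CREATED DATE); -- DATE: date and time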

Indexes

Can be built over any field (unique or nonunique) of a file. Can also be built on a combination of fields. In addition to its direct access capability, an index can be used to retrieve the records of a file in logical sequence based on the indexed field. Many separate indexes into a file can exist simultaneously. The indexes are quite independent of each other. When a new record is inserted into a file, an existing record is deleted, or an indexed field is updated, all of the affected indexes must be updated. -Thus note: it is a space AND performance issue.
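As a sketch, two independent indexes could be built over the course's SALESPERSON table (the index names are invented); both must be maintained every time a SALESPERSON row is inserted, deleted, or updated:
CREATE INDEX SPNAME_IX ON SALESPERSON (SPNAME);
CREATE INDEX OFFYEAR_IX ON SALESPERSON (OFFNUM, YEARHIRE);
The second statement shows an index built on a combination of fields. DROP INDEX SPNAME_IX; removes an index without touching the table's data.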

Changing Attributes in a Table

Changes the logical design. Substituting a Foreign Key -Substitute an alternate key (Salesperson Name, assuming it is a unique attribute) as a foreign key. -Saves on the number of performance-slowing joins.

Physical Design Categories and Techniques That DO Change the Logical Design

Changing Attributes in a Table: Substituting Foreign Keys. Adding Attributes to a Table: Creating New Primary Keys, Storing Derived Data. Combining Tables: Combining Tables in One-to-One Relationships, Alternatives for Repeating Groups, Denormalization. Adding New Tables: Duplicating Tables, Adding Subset Tables.

Adding attributes to a Table

Creating New Primary Keys -Changes the logical design. -In a table with no single-attribute primary key, indexing a multi-attribute key would likely be clumsy and slow. -Create a new serial number attribute as the primary key for the table. -For example, the current two-attribute primary key of the CUSTOMER EMPLOYEE table can be replaced by one new attribute. ---- Storing Derived Data -Calculate the answers to certain queries once and store them in the database.
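A hedged SQL sketch of both ideas (the EMPID and TOTALSALES columns are invented, and CUSTOMER_EMPLOYEE stands in for the course's CUSTOMER EMPLOYEE table):
ALTER TABLE CUSTOMER_EMPLOYEE ADD (EMPID CHAR(5)); -- new serial-number attribute to replace the clumsy two-attribute key
ALTER TABLE SALESPERSON ADD (TOTALSALES NUMBER(10,2)); -- derived total stored so it need not be recalculated on every query
Making EMPID the primary key would also require dropping the old two-attribute key constraint and filling EMPID with unique values, and the stored TOTALSALES figure must be refreshed whenever the underlying sales data changes.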

Hardware/Software Characteristics

DBMS Characteristics: for example, the exact nature of indexes, attribute data type options, and SQL query features, all of which must be known and taken into account during physical database design. Hardware Characteristics: processor speeds and disk data transfer rates.

Responsibilities of Database Admin

DBMS performance monitoring. DBMS troubleshooting. DBMS usage and security monitoring. Data dictionary operations. DBMS data and software maintenance. Database design.

Operational Requirements: Data Security, Backup and Recovery

Data Security: protecting data from theft or malicious destruction and making sure that sensitive data is accessible only to those employees of the company who have a "need to know." Backup and Recovery: ranges from being able to recover a table or a database that has been corrupted or lost due to hardware or software failure, to recovering an entire information system after a natural disaster.

Data Characteristics

Data Volume Assessment -How much data will be in the database? -Roughly how many records is each table expected to have? Data Volatility -Refers to how often stored data is updated.

Training

Data administration may be responsible for training all those in the company who have a reason to understand the company's data and, in some cases, the DBMS environment.

Liaison to Systems Analysts and Programmers

Data administrators are responsible for providing support to the systems analysts and programmers in all matters concerning the data needed by an application. Data analysts are generally involved in database design at some level, but the decision of what that precise level of involvement should be is dependent on a number of factors.

Advantages of Data and Database Administration

Data as a shared corporate resource. Efficiency in job specialization. Operational management of data. Managing externally acquired databases. Managing data in the decentralized environment.

Data's Competitive Advantage

Data can provide a competitive advantage for the company. Data administrators are in a unique position to understand how the company "works." It is the responsibility of the data administration function to respond to questions about how the company's business procedures can be adjusted or modified to improve the company's operating efficiency.

Responsibilities of Data Admin

Data coordination. Data planning. Data standards. Liaison to systems analysts and programmers. Training. Arbitration of disputes and usage authorization. Documentation and publicity. Data's competitive advantage.

DBMS Usage and Security Monitoring

Database administrators keep track of which applications are running in the database environment and can track who is accessing the data in the database at any moment. Security: making sure that only authorized personnel access the data. Usage: the need to maintain records on the amount of use the various users make of the database.

How Disk Storage Works

Disks come in a variety of types and capacities -Multi-platter, aluminum or ceramic disk units -Removable, external hard drives. Provide a direct access capability to the data. Example: IBM StorWize V7000 -Up to 48 TB of storage Several disk platters are stacked together, and mounted on a central spindle, with some space in between them. Referred to as "the disk." The platters have a metallic coating that can be magnetized, and this is how the data is stored, bit-by-bit.

Division Remainder Method

Divide the key value of the record that we want to insert or retrieve by the number of record locations that we have reserved. Perform the division, discard the quotient, and use the remainder to tell us where to locate the record. Hashing disallows any sequential storage based on a set of field values. A file can only be hashed once, based on the values of a single field or a single combination of fields. If a file is hashed on one field, direct access based on another field can be achieved by building an index on the other field. Many hashing routines have been developed. The goal is to minimize the number of collisions, which can slow down retrieval performance. In practice, several hashing routines are tested on a file to determine the best "fit." Even a relatively simple procedure like the division-remainder method can be fine-tuned. A hashed file must occasionally be reorganized after so many collisions have occurred that performance is degraded to an unacceptable level. A new storage area with a new number of storage locations is chosen, and the process starts all over again.
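A quick worked example with invented numbers: suppose 50 record locations are reserved and a record's key value is 4619. Dividing 4619 by 50 gives a quotient of 92 and a remainder of 19, so the record is stored at, and later retrieved from, location 19. A record with key 5719 (quotient 114, remainder 19) hashes to the same location, producing a collision that must be handled in the overflow area.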

Third Normal Form

Does not allow transitive dependencies, in which one nonkey attribute is functionally dependent on another: nonkey attributes are not allowed to define other nonkey attributes. Important points about the third normal form structure are: -It is completely free of data redundancy. -All foreign keys appear where needed to logically tie together related tables. -It is the same structure that would have been derived from a properly drawn entity-relationship diagram of the same business environment.
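A hedged SQL sketch of removing a transitive dependency, assuming (for illustration only) that the salesperson table also carried an office phone number, which depends on the nonkey attribute Office Number:
-- Unnormalized sketch: SALESPERSON (SPNUM, SPNAME, OFFNUM, OFFPHONE), where OFFNUM --> OFFPHONE
-- Third normal form splits the transitive dependency into two tables:
CREATE TABLE SALESPERSON (SPNUM CHAR(3) PRIMARY KEY, SPNAME CHAR(12), OFFNUM CHAR(3));
CREATE TABLE OFFICE (OFFNUM CHAR(3) PRIMARY KEY, OFFPHONE CHAR(10));
Each office's phone number is now stored exactly once, and OFFNUM in SALESPERSON serves as the foreign key tying the two tables together.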

Adding External Features

Doesn't change the logical design at all, and there is no introduction of data redundancy. Includes: Adding Indexes, Adding Views.

Adding External Features: Adding Views

Doesn't change the logical design. No data is physically duplicated. An important device in protecting the security and privacy of data.

Reorganizing Stored Data

Doesn't change the logical design. No data is physically duplicated. Clustering Files -Houses related records together on a disk.

Adding New Tables

Duplicating Tables -Duplicate tables and have different applications access the duplicates. Adding Subset Tables -Duplicate only those portions of a table that are most heavily accessed. -Assign subsets to different applications to ease the performance crunch.
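A sketch of a subset table in SQL, assuming the heavily accessed portion is the salespeople hired in 2010 or later (the cutoff year is invented):
CREATE TABLE RECENT_SALESPERSON AS SELECT * FROM SALESPERSON WHERE YEARHIRE >= '2010';
The duplicate or subset table introduces redundant data, so it must be kept in sync with the original table.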

Converting Entities in Binary Relationships: many to many

E-R diagram with the many-to-many binary relationship and the equivalent diagram using an associative entity. An E-R diagram with two entities in a many-to-many relationship converts to three relational tables. Each of the two entities converts to a table with its own attributes but with no foreign keys (regarding this relationship). In addition, there must be a third "many-to-many" table for the many-to-many relationship. The primary key of SALE is the combination of the unique identifiers of the two entities in the many-to-many relationship. Additional attributes are the intersection data.
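A SQL sketch of the three resulting tables, assuming SALESPERSON and PRODUCT in a many-to-many relationship with Quantity as the intersection data (the column lists are simplified and the sizes illustrative):
CREATE TABLE SALESPERSON (SPNUM CHAR(3) PRIMARY KEY, SPNAME CHAR(12));
CREATE TABLE PRODUCT (PRODNUM CHAR(5) PRIMARY KEY, PRODNAME CHAR(20), UNITPRICE NUMBER(6,2));
CREATE TABLE SALE (SPNUM CHAR(3), PRODNUM CHAR(5), QUANTITY NUMBER(5), PRIMARY KEY (SPNUM, PRODNUM));
Neither entity table carries a foreign key for this relationship; the SALE table's two-attribute primary key holds the keys of both entities, and QUANTITY is the intersection data.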

Converting ER Diagrams into Relational Tables

Each entity will convert to a table. Each many-to-many relationship or associative entity will convert to a table. During the conversion, certain rules must be followed to ensure that foreign keys appear in their proper places in the tables.

Database Performance

Factors Affecting Application and Database Performance: Application Factors (need for joins, need to calculate totals); Data Factors (large data volumes); Database Structure Factors (lack of direct access, clumsy primary keys); Data Storage Factors (related data dispersed on disk); Business Environment Factors (too many data access operations, overly liberal data access).

Designing Fields

Field: the smallest unit of data in a database. Field design involves choosing the correct data type. All relational databases have similar data types, but with slightly different names.

File Organizations and Access Methods

File Organization - the way that we store the data for subsequent retrieval. Access Method - The way that we retrieve the data, based on it being stored in a particular file organization.

Steps in Data Normalization Process

First Normal Form Second Normal Form Third Normal Form ---- Once the attributes are arranged in third normal form, the group of tables that they comprise is a well-structured relational database with no data redundancy. A group of tables is said to be in a particular normal form if every table in the group is in that normal form. The data normalization process is progressive. -For example, if a group of tables is in second normal form, it is also in first normal form.

Data Standards

For example, controlling the way that attribute names, table names, and other data related names are formed. For example, insisting on consistency in the way the programs that access the database are written, especially in regard to the database call instructions.

Tables Produced by Logical Database Design Process

Form the starting point of the physical database design process. Reflect all of the data in the business environment. Are likely to be unacceptable from a performance point of view and must be modified in physical database design.

Choosing One-to-one option

Generally keep the entities separate if they are important business items that have many attributes. Otherwise, pick the option that produces the fewest nulls in an entity.

Splitting a Table into Multiple Tables

Horizontal Partitioning -The rows of a table are divided into groups, and the groups are stored separately on different areas of a disk or on different disks. -Useful in managing the different groups of records separately for security or backup and recovery purposes. -Improve data retrieval performance. -Disadvantage: retrieval of records from more than one partition can be more complex and slower. ------- Vertical Partitioning -The separate groups, each made up of different columns of a table, are created because different users or applications require different columns. -Each partition must have a copy of the primary key. ----- Splitting-Off Large Text Attributes -A variation on vertical partitioning involves splitting off large text attributes into separate partitions. -Each partition must have a copy of the primary key.
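A hedged sketch of both kinds of partitioning using CREATE TABLE AS SELECT (the CUSTOMER columns REGION, CUSTNUM, and CREDITLIMIT are invented for illustration):
-- Horizontal partitioning: different groups of rows stored separately
CREATE TABLE CUSTOMER_EAST AS SELECT * FROM CUSTOMER WHERE REGION = 'EAST';
CREATE TABLE CUSTOMER_WEST AS SELECT * FROM CUSTOMER WHERE REGION = 'WEST';
-- Vertical partitioning: different groups of columns, each keeping a copy of the primary key
CREATE TABLE CUSTOMER_CREDIT AS SELECT CUSTNUM, CREDITLIMIT FROM CUSTOMER;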

Arbitration of Disputes and Usage Authorization

If there is a dispute over data sharing, then the data administration group acts as an arbitrator between the disagreeing parties.

Combining Tables

If two tables are combined into one, then there must surely be situations in which the presence of the new single table allows us to avoid joins that would have been necessary when there were two tables. -Combination of Tables in One-to-One Relationships. Disadvantages: the tables are no longer logically (as well as physically) independent; retrievals of salesperson data alone or of office data alone could be slower than before; storage of data about unoccupied offices is problematic and may require a reevaluation of which field should be the primary key. ------- -Alternatives for Repeating Groups: if repeating groups are well controlled, they can be folded into one table. -------- -Denormalization: it may be necessary to take pairs of related, third normal form tables and combine them, introducing possibly massive data redundancy. Unsatisfactory response times and throughput may mandate eliminating run-time joins. Since a salesperson can have several customers, a particular salesperson's data will be repeated for each customer he has.
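As a sketch, denormalizing SALESPERSON and CUSTOMER into one table to eliminate a run-time join might look like this (the CUSTOMER columns are illustrative; SPNUM is the foreign key from the one-to-many conversion):
CREATE TABLE SALESPERSON_CUSTOMER AS SELECT C.CUSTNUM, C.CUSTNAME, S.SPNUM, S.SPNAME, S.COMMPERCT FROM CUSTOMER C, SALESPERSON S WHERE C.SPNUM = S.SPNUM;
Each salesperson's name and commission percentage now repeat once per customer, which is exactly the redundancy that denormalization accepts in exchange for faster retrieval.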

Data Coordination

In the centralized/decentralized environment, with data and copies of data scattered among mainframe computers, local area network servers, and even PCs, the possibility of inconsistency and error increases. Data coordination becomes the job of the data administrators, by which they maintain a reasonable amount of control over the company's data.

Simple Linear Index

The index is ordered by the Salesperson Name field. The first index record shows Adams 3 because the record of the Salesperson file with salesperson name Adams is at relative record location 3 in the Salesperson file. An index can also be built over a field with nonunique values, such as the City field, or over the Salesperson Number field. Indexed-sequential file: the file is stored on the disk in order based on a set of field values (salesperson numbers), and an index is built over that same field. The problem is insertion: a new index record such as French 8 would have to be inserted between the index records for Dickens and Green to maintain the crucial alphabetic sequence, which would require moving all of the index records from Green to Taylor down one record position. So a simple linear index is not a good solution for indexing the records of a file.

Data Normalization Technique

Input: -all the attributes that must be incorporated into the database -a list of all the defining associations between the attributes (i.e., the functional dependencies). >A means of expressing that the value of one particular attribute is associated with a single, specific value of another attribute. >If we know that one of these attributes has a particular value, then the other attribute must have one specific value.

Passive and Active Data Dictionaries

Passive Data Dictionary -Used for documentation purposes. -Data about the entities in the IS environment are entered into the dictionary and cross-referenced as one-to-many and many-to-many relationships. -A self-contained database used for documenting the IS environment. Active Data Dictionary -Interacts with the IS environment on a real-time basis. -Input into the data dictionary -Output from the data dictionary

Efficiency in Job Specialization

Many of the functions involved in the management of data are highly specialized and require specific expertise. This argues for a full-time staff of specialists who do nothing but manage a company's data and databases.

Database Administration

More operationally oriented. Responsible for the day-to-day monitoring and management of the company's various active databases. Responsible for providing liaison support to program designers during application development. Carries out many of the policies set by data administration.

Second Normal Form

No Partial Functional Dependencies -Every nonkey attribute must be fully functionally dependent on the entire key of that table. -A nonkey attribute cannot depend on only part of the key. --- In SALESPERSON, Salesperson Number is the sole primary key attribute. Every nonkey attribute of the table is fully defined just by Salesperson Number. Similar logic for PRODUCT and QUANTITY tables.

Computer Storage Numbers

Numbers are stored in binary form: 2 is stored as 00000010, 4 as 00000100, 255 as 11111111, and 256 as 0000000100000000 (two bytes). Note that the number 2 is stored very differently than the character '2': -the number 2 is 00000010 -the character '2' is 00110010

Database Performance Monitoring

One of the key functions performed by database administration. It is important to know how fast the various applications are executing as part of ensuring that response time requirements are being met.

Managing Data in the Decentralized Environment

Permits user departments all over the company to handle some or all of their information systems needs on their own, without having to rely on the central information systems organization. Decentralization is a fact of life to a greater or lesser degree in virtually all companies. Many people are quite content to develop their own databases on their PCs. However, there is a very strong argument that says that data and database administration are even more important in a decentralized environment than in a centralized one. Most companies have a hybrid centralized/decentralized environment.

Inputs to Physical Database Design

Physical database design starts where logical database design ends. The well-structured relational tables produced by the conversion from ERDs or by the data normalization process form the starting point for physical database design. The inputs are: business environment requirements, data characteristics, application characteristics, operational requirements, and hardware and software characteristics.

Disk Storage

Primary (Main) Memory - where computers execute programs and process data. -Very fast -Permits direct access -Has several drawbacks: >relatively expensive >not transportable >volatile -8 GB typical in consumer laptops -Our Oracle DB for class: >VMware server with 32 GB RAM >4 GB allocated to fsl2 (our server) Secondary Memory - stores the vast volume of data and the programs that process them. Data is loaded from secondary memory into primary memory when required for processing. A 500 GB hard drive is typical on consumer laptops.

The Index

The principle is the same as that governing the index in the back of a book. The items of interest are copied over into the index, but the original text is not disturbed in any way. The items in the index are sorted. Each item in the index is associated with a "pointer."

Business Environment Requirements: Response Time Requirements

Response time is the delay from the time that the Enter Key is pressed to execute a query until the result appears on the screen. What are the response time requirements?

Steps in Finding and Transferring Data

Seek Time - the time it takes to move the access arm mechanism to the correct cylinder from whatever cylinder it is currently positioned over. Head Switching - selecting the read/write head to access the required track of the cylinder. Rotational Delay - waiting for the desired data on the track to arrive under the read/write head as the disk spins. Transfer Time - the time to actually move the data from the disk to primary memory once the previous three steps have been completed.
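These four steps add up to the total time for one random access. A worked example with invented, order-of-magnitude figures: an 8 ms seek, head switching that is effectively instantaneous (it is electronic), an average rotational delay of about 4.2 ms (half of the 8.3 ms revolution time of a 7,200 RPM disk), and a 0.1 ms transfer give roughly 12 ms per access.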

Data Dictionary: Relationships

Table (or file) Construction: Which attributes (or fields) appear in which tables (or files). Security: Which people have access to which databases or tables or files. Impact of Change: Which programs might be affected by changes to which tables or files. (Note: this has become much less of an issue due to the data independence of relational databases.) Physical Residence: Which tables or files are on which disks. Program Data Requirements: Which programs use which tables or files. Responsibility: Which people are responsible for updating which databases or tables or files.

Database Mgmt

The "people side" of database management consists of two parts: Data Administration Database Administration

Data Dictionaries

The IS function has been so busy developing and running systems to support all the other corporate functions that it took a long time before it could invest the resources to develop information systems to support itself. Metadata: data about data. Data dictionary: a database about data. In addition to tracking the basic facts about the represented entities, a data dictionary must keep track of the relationships between the entities; for example, an associative table can represent the many-to-many relationship between the tables and attributes recorded in the data dictionary's TABLES table and ATTRIBUTES table.

First Normal Form

The attributes under consideration have been listed in one table, and a primary key has been established. The number of records has been increased so that every attribute of every record has just one value. The multivalued attributes have been eliminated. --- First normal form is merely a starting point in the normalization process. First normal form contains a great deal of data redundancy. -Three records involve salesperson 137, so there are three places in which his name is listed as Baker, his commission percentage is listed as 10, and so on. -Two records involve product 19440 and this product's name is listed twice as Hammer and its unit price is listed twice as 17.50.

Access Arm Mechanism

The basic disk drive has one access arm mechanism with arms that can reach in between the disks. At the end of each arm are two read/write heads. The platters spin, all together as a single unit, on the central spindle, at a high velocity.

Data Normalization Check

The basic idea in checking the structural worthiness of relational tables, created through E-R diagram conversion, with the data normalization rules is to: -Check to see if there are any partial functional dependencies. -Check to see if there are any transitive dependencies.

DBMS Troubleshooting

The database administrators should be the troubleshooting interface. Make an assessment of what went wrong and coordinate the appropriate personnel needed to fix it, including systems programmers, application programmers, and the data administrators themselves.

Uses and Users

The heaviest users of the data dictionary will be IS management and the data administration and database administration functions under them. Systems analysts and program designers can use the data dictionary: -as a source of information about what entities, attributes, and so forth already exist in the IS environment. -as a documentation device for new information.

B+ Tree Index

The most common data indexing system in use today. -Used by Oracle Unlike simple linear indexes, B+-trees are designed to comfortably handle the insertion of new records into the file and to handle record deletion. An arrangement of special index records in a "tree." A single index record, the "root," at the top, with "branches" leading down from it to other "nodes." The lowest level nodes are called "leaves." Think of it as a family tree. The data records are at the leaves. Each key value in the tree is associated with a pointer that is the address of either a lower level index record or a cylinder containing the salesperson records. The index records contain salesperson number key values copied from certain of the salesperson records. Each index record, at every level of the tree, contains space for the same number of key value/pointer pairs. Each index record is at least half full. The tree index is small and can be kept in main memory indefinitely for a frequently accessed file. This is an indexed-sequential file, because the file is stored in sequence by the salesperson numbers and the index is built over the Salesperson Number field. B+-tree indexes can also be used to index nonkey, nonunique fields. In general, the storage unit for groups of records can be the cylinder or any other physical device subunit. Say that a new record with salesperson number 365 must be inserted. Suppose that cylinder 5 is completely full. The collection of records on the entire cylinder has to be split between cylinder 5 and an empty reserve cylinder, say cylinder 11. There is no key value/pointer pair representing cylinder 11 in the tree index. The index record, into which the key for the new cylinder should go, which happens to be full, is split into two index records. The now five key values and their associated pointers are divided between them.

Hashed Files

The number of records in a file is estimated, and enough space is reserved on a disk to hold them. Additional space is reserved for additional overflow records. To determine where to insert a particular record of the file, the record's key value is converted by a hashing routine into one of the reserved record locations on the disk. To find and retrieve the record, the same hashing routine is applied to the key value during the search.

Logical Database Design

The process of deciding how to arrange the attributes of the entities in the business environment into database structures, such as the tables of a relational database. The goal is to create well structured tables that properly reflect the company's business environment.

Physical Database Design

The process of modifying a database structure to improve the performance of the run-time environment. We are going to modify the third normal form tables produced by the logical database design techniques to make the applications that will use them run faster.

Converting a Simple Entity

The table simply contains the attributes that were specified in the entity box. Salesperson Number is underlined to indicate that it is the unique identifier of the entity and the primary key of the table.

Converting Entities in Binary Relationships: One to many

The unique identifier of the entity on the "one side" of the one-to-many relationship is placed as a foreign key in the table representing the entity on the "many side." So, the Salesperson Number attribute is placed in the CUSTOMER table as a foreign key.
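A SQL sketch, assuming a CUSTOMER table keyed on CUSTNUM (the customer attributes are illustrative):
CREATE TABLE CUSTOMER (CUSTNUM CHAR(4) PRIMARY KEY, CUSTNAME VARCHAR2(35), SPNUM CHAR(3) REFERENCES SALESPERSON (SPNUM));
The REFERENCES clause records that SPNUM in CUSTOMER is a foreign key pointing back to the salesperson on the "one side" of the relationship.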

Database Design

There is a wide range in database administration responsibilities for database design. In the centralized environment, database administration is responsible for physical database design and possibly logical database design. In the decentralized environment, the database administration role is usually that of a consultant.

Candidate Keys as Determinants

There is one exception to the rule that in third normal form, nonkey attributes are not allowed to define other nonkey attributes. The rule does not hold if the defining nonkey attribute is a candidate key of the table. Candidate keys in a relation may define other nonkey attributes without violating third normal form.

DBMS Data and Software Maintenance

These activities include: installing new versions of the DBMS, installing "fixes" or "patches" to the DBMS, performing backup and recovery operations, and any other tasks related to repairing or upgrading the DBMS or the database.

Converting Entities in Unary Relationships: many to many

This relationship requires two tables in the conversion. The PRODUCT table has no foreign keys. A second table is created since in the conversion of a many-to-many relationship of any degree — unary, binary, or ternary — the number of tables will be equal to the number of entity types (one, two, or three, respectively) plus one more table for the many-to-many relationship.

Converting Entities in Binary Relationships: One-to-One

Three different ways: 1. The two entities are combined into one relational table. 2. Separate tables for the SALESPERSON and OFFICE entities, with Office Number as a foreign key in the SALESPERSON table. 3. Separate tables for the SALESPERSON and OFFICE entities, with Salesperson Number as a foreign key in the OFFICE table.

Business Environment Requirements: Throughput Requirements

Throughput is the measure of how many queries from simultaneous users must be satisfied in a given period of time by the application set and the database that supports it.

The SQL Update, Insert, and Delete Commands

UPDATE SALESPERSON SET COMMPERCT = 12 WHERE SPNUM = '204'; INSERT INTO SALESPERSON VALUES ('489', 'Quinlan', 15, '2011', '59'); DELETE FROM SALESPERSON WHERE SPNUM = '186';

Converting Entities in Unary Relationships: one to many

Very similar to the one-to-one unary case.

Data as a Shared Corporate Resource

Virtually all aspects of business have become dependent on their information systems and the data flowing through them. Data may well be the most important corporate resource because it describes all of the others. The effective use of its data can give a company a significant competitive advantage. Data, like other resources, tends to be scarce. The dedicated departments that manage the company's data are the data administration and database administration departments.

Application Characteristics

What is the nature of the applications that will use the data? Which applications are the most important to the company? Which data will be accessed by each application? The two key considerations are Application Data Requirements and Application Priorities.

Application Characteristics: Priorities

When a modification to a table proposed during physical design that's designed to help the performance of one application hinders the performance of another application, which of the two applications is the more critical to the company?

Primary and Secondary Memory

When a person needs some particular information that's not in her brain at the moment, she finds a book in the library that has the information and, by reading it, transfers the information from the book into her brain.

Adding External Features: Adding Indexes

Which attributes or combinations of attributes should you consider indexing in order to have the greatest positive impact on the application environment? -Attributes that are likely to be prominent in direct searches >Primary keys >Search attributes -Attributes that are likely to be major players in operations, such as joins, SQL SELECT ORDER BY clauses and SQL SELECT GROUP BY clauses. What potential problems can be caused by building too many indexes? Indexes are wonderful for direct searches. But when the data in a table is updated, the system must take the time to update the table's indexes, too.

Application Characteristics: Data Requirements

Which database tables does each application require for its processing? Do the applications require that tables be joined? How many applications and which specific applications will share particular database tables? Are the applications that use a particular table run frequently or infrequently?

Data Ownership

Who owns the data? Since data is a resource of value to the company, the data "belongs" to the company's owners or stockholders. Practically, data is controlled by its user or primary user. If ownership has been established and a new application requires the use of existing data, then it is the job of data administration to act as an intermediary and approach the owner of the data with the request for data sharing.

Converting Entities in Unary Relationships: one to one

With only one entity type involved and with a one-to-one relationship, the conversion requires only one table.

