Business Intelligence 1


What is the difference between transaction-based and non-transactional-based tables?

(Used when establishing many-to-many relationships in an ER Model) Transaction-based tables store measures and facts about the business. Records are typically inserted, updated, and deleted as transactions occur, and the tables are large in volume. Ex. store billing. Non-transaction-based tables store descriptors of the company (ex. customers, employees, product names) and are smaller in size. There are many more non-transaction-based tables than transaction-based tables in a given ER model.

What is the ETL Process?

(Extract, Transform, Load) The process by which data is transferred from the source systems to the data warehouse; there is also a cleaning step. 1. Extract -retrieve all the required data from the source system with as few resources as possible 2. Clean -ensure the quality of the data by applying basic unification rules, i.e. converting phone numbers and zip codes to a standardized form 3. Transform -convert measured data into the same dimensions and units so that the data can later be joined; requires joining data, forming aggregates, etc. 4. Load -write the data into the target, which is typically a database.
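A minimal sketch of the four steps in plain Python, assuming a made-up source of billing rows and using a list as the stand-in warehouse:

```python
# Minimal ETL sketch (hypothetical source rows; a list stands in for the
# warehouse). Each step mirrors extract -> clean -> transform -> load.

def extract(source):
    # Extract: pull only the fields we need from the source system.
    return [{"phone": r["phone"], "amount": r["amount"]} for r in source]

def clean(rows):
    # Clean: unify phone numbers to a standardized digits-only form.
    for r in rows:
        r["phone"] = "".join(ch for ch in r["phone"] if ch.isdigit())
    return rows

def transform(rows):
    # Transform: convert cents to dollars so all measures share one unit.
    for r in rows:
        r["amount"] = r["amount"] / 100
    return rows

def load(rows, warehouse):
    # Load: append to the target store.
    warehouse.extend(rows)

warehouse = []
source = [{"phone": "(513) 555-0100", "amount": 1999}]
load(transform(clean(extract(source))), warehouse)
print(warehouse)  # [{'phone': '5135550100', 'amount': 19.99}]
```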

What is a data mart?

-A subset of a data warehouse, typically consisting of a single subject area -A departmental, small-scale "DW" that stores only limited/relevant data -Comes in two varieties: dependent and independent

Different DW Architectures: Independent Data Mart

-DMs that are independent of one another -Do not provide "a single version of the truth" -Non-conformed dimensions make it difficult to analyze data across the marts. Flow: Source systems -> Staging area -> Independent DMs -> End-user access and applications

Different DW Architectures: Hub-and-Spoke Architecture (Inmon)

-First, you must analyze data requirements -Architecture is developed subject by subject -Data is normalized to 3NF -Dependent data marts obtain data from the data warehouse -Dependent data marts may be developed for departmental, functional-area, or special purposes and may be normalized, denormalized, or summarized dimensional data structures based on user needs. Flow: Source systems -> Staging area -> Normalized relational warehouse (atomic data) and dependent data marts (summarized/some atomic data) -> End-user access and applications

What is a Transaction Log?

-A file that records all changes made in the database system so the system can recover after a crash. Typical record types: [start_transaction, T]; [write_item, T, X, old value, new value]; [read_item, T, X]; [commit, T]; [abort, T]
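A toy sketch of how such records might be appended (the bracketed text format is a simplification; real transaction logs are managed binary files):

```python
# Sketch of appending log records in the bracketed format above.
log = []

def log_record(*fields):
    # Render one log entry as "[field, field, ...]" and append it.
    log.append("[" + ", ".join(str(f) for f in fields) + "]")

log_record("start_transaction", "T1")
log_record("write_item", "T1", "X", 10, 20)  # old value 10, new value 20
log_record("commit", "T1")
print(log)
```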

Different DW Architectures: Data Mart Bus Architecture with Linked Dimensional Data Marts (Kimball)

-Starts with a specific business requirement analysis, then is built for this single business process using dimensions and measures that will be shared with other data marts (conformed dimensions) -Logically integrated because other data marts will be developed with these conformed dimensions -Star schema provides a dimensional view of the data. Flow: Source systems -> Staging area -> Dimensionalized data marts linked by conformed dimensions -> End-user access and applications

Why would you use information packages?

-Define common subject areas -Design key business metrics -Decide how data must be presented -Determine how users will aggregate or roll up -Decide the data quantity for analyses or queries -Decide how data will be accessed -Establish data granularity -Determine the frequency for data refreshing -Ascertain how information must be packaged

Dimensional Modeling for DW: Star schema

-The facts in the middle have dimensions attached describing the facts -Upside: easy to understand, designed around the processing needs of users, direct access to data -Downside: hard to link to other business processes; you can only link if you plan for the link in the beginning -Captures critical measures -Views along dimensions

What is the fact table in dimensional models?

1 specific business process that is linked to some set of dimensions (ex. sales, order-price). Facts are typically numerical; text facts are rare. *The primary key is a composite of the primary keys of the dimension tables* -You can also use a surrogate key because it takes up less space

What is the process of dimensional design?

1) Choose the business process 2) Declare the grain -ex. hourly totals, individual transactions 3) Identify the dimensions that will apply to the facts -ex. time 4) Identify the facts -ex. units sold

What are the 4 basic constructs of the midlevel data model?

1. A primary grouping of data -holds attributes that appear only once for each major subject area 2. A secondary grouping of data -holds data attributes that can exist multiple times for each major subject area 3. A connector -relationships of data between major subject areas; a foreign key 4. "Type of" data -left data is the supertype, right data is the subtype

Why would you do Data Fragmentation?

1. Allows us to break a single object into two or more fragments 2. Each fragment can be stored at any site over a computer network 3. Data fragmentation information is

What are the types of loads that are made into the data warehouse from the operational environment?

1. Archival data -sometimes not cost-effective to load old data 2. Data currently contained in the operational environment 3. Ongoing changes -updates to the data warehouse reflecting changes that have occurred in the operational environment since the last refresh; this is the biggest issue for organizations

What are the properties of Transaction? (ACID)

1. Atomicity -the transaction has to be done in its entirety or not at all 2. Consistency -moves the database from one consistent state to another (bank example) 3. Isolation -a transaction should not make its updates visible to other transactions until it is committed (the DB item is locked during the initial transaction) 4. Durability -once the transaction changes the database and commits, the changes are never lost.
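Atomicity and rollback can be illustrated with Python's built-in sqlite3 module; the account names and amounts here are invented:

```python
# Atomicity sketch with sqlite3: the failed transfer is rolled back in its
# entirety, so the balances stay in a consistent state.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES ('A', 100), ('B', 0)")
con.commit()

try:
    with con:  # opens a transaction; commits on success, rolls back on error
        con.execute("UPDATE account SET balance = balance - 150 WHERE name = 'A'")
        row = con.execute("SELECT balance FROM account WHERE name = 'A'").fetchone()
        if row[0] < 0:
            raise ValueError("overdraft")  # abort: all or nothing
        con.execute("UPDATE account SET balance = balance + 150 WHERE name = 'B'")
except ValueError:
    pass

balances = dict(con.execute("SELECT name, balance FROM account"))
print(balances)  # {'A': 100, 'B': 0} -- the partial update was undone
```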

Why do we use surrogate keys as primary keys for dimensional tables?

1. It avoids keys with built-in meanings 2. It avoids reusing production (operational-system) keys -You do not want either of those as the primary key of a dimension table because they can cause many problems when aggregating data. If a retired customer's number is reassigned to a new customer in the operational system, aggregations will pull data from the retired customer as well as the new one. Similarly, if an item moves to a different warehouse and its product number changes in the operational system, the existing item gets a new number, which causes problems when aggregating. -It is still common practice to carry the production keys as non-key attributes in the tables.

What are the properties of HDFS?

1. Can take unstructured or structured data. Cannot update in place; must drop then re-add. 2. Relies on commodity hardware, so the software must tolerate failures; data loss can happen, and there is no built-in security or encryption. It is not the answer for everything.

What are the characteristics of the fact table?

1. Concatenated key -rows in the fact table are identified by the primary keys of the dimension tables; the primary key of the fact table is a concatenated key made up of the foreign keys from the dimension tables 2. Grain, or level of data, identified 3. Fully additive measures 4. Semi-additive measures -measures that cannot be added across every dimension, for example marginal cost 5. Deep, not wide -very narrow in columns, very long in rows 6. Sparse data -understand that there can be gaps; on holidays or weekends certain rows will hold nulls 7. Degenerate dimensions -numbers in the fact table that are just references, ex. order number, product number; they can be useful, so we keep them in the fact table

What are the goals for distributed databases?

1. Create a single-image, transparent environment where the database users and application programs can work at the local level and still be able to access and share data with other sites in a network. 2. Distribution Transparency: execute global queries and transactions as though the database is a centralized single database 3. Hide performance complexities of the distributed databases from users and applications programs

What are the 3 major processes of data integration?

1. Data access -can our tools access the relevant information? 2. Data federation -can we integrate one consistent view across different data stores? 3. Change capture -will it document changes from source to destination?

Why use a data warehouse?

1. Data integration across subject areas to provide a company-wide view of high-quality information 2. Data integration through time to provide a time-perspective view of organizational data 3. Data integration of internal and external data to provide a complete view of business performance

What are advantages of STAR Schema?

1. Easy for users to understand -it is denormalized, and the joins are easy for decision makers to follow 2. Optimizes navigation through the database -even information you would believe difficult to find is fairly easy to reach in this schema, ex. it is easy to find which supplier is supplying a car dealership with cars that have chipped paint 3. Most suitable for query processing 4. STARjoin and STARindex -query-processor software optimizations

What are file systems?

1. Electronic storage medium invented in the 1950s 2. Led to high level programming languages (Cobol) 3. Batch processing -Looks at all the data at the same time 4. Many applications -made it easier to make different applications for recording data according to the end user (i.e. marketing department, accounting department, finance department all had different applications)

What are the kinds of data integration options?

1. Enterprise Application Integration (EAI) and Enterprise Data Replication (EDR) - pushes data into DWH - Service oriented architectures 2. Enterprise Information Integration (EII) - data is not physical, has to be requested in a 'pull system'

Why is DQ a problem today?

1. External data sources 2. Redundant data storage and inconsistent metadata 3. Data entry errors 4. Lack of organizational commitment

What are the steps for data quality improvement?

1. Get business buy-in 2. Perform a data quality audit 3. Establish a data stewardship program 4. Improve data capture

What are the 3 different data models?

1. High-level data model -ER diagrams featuring entities and relationships 2. Midlevel data model -made after the high level is created -called a data item set (DIS) -each entity gets its own midlevel model 3. Low-level data model -the physical data model -extends the midlevel data model to include keys and physical characteristics of the model -looks like a series of relational tables *then you must factor in the optimization of performance characteristics* -granulating and partitioning

What are the 2 kinds of Data Fragmentation?

1. Horizontal -split the data and store different rows at multiple locations -defined by a SELECT statement (rows) -relation recovered by a UNION 2. Vertical -the relation is fragmented into column subsets, each containing some of the attributes -defined by a PROJECT statement, with the primary key repeated in each split table (columns) -relation recovered by a natural JOIN *There is also mixed fragmentation, which combines the two
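A small sketch of both kinds of fragmentation using plain Python dicts as rows (hypothetical data; real fragmentation happens inside a distributed DBMS):

```python
# Horizontal vs. vertical fragmentation sketch on an in-memory relation.
relation = [
    {"id": 1, "name": "Ann", "region": "East"},
    {"id": 2, "name": "Bob", "region": "West"},
]

# Horizontal: SELECT-style split by rows; recovered with a UNION.
east = [r for r in relation if r["region"] == "East"]
west = [r for r in relation if r["region"] == "West"]
recovered_h = east + west  # union of the row fragments

# Vertical: PROJECT-style split by columns, primary key kept in each part;
# recovered with a natural join on the key.
names = {r["id"]: r["name"] for r in relation}
regions = {r["id"]: r["region"] for r in relation}
recovered_v = [{"id": k, "name": names[k], "region": regions[k]} for k in names]

print(recovered_h == relation and recovered_v == relation)  # True
```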

What are the steps for converting an ER model to a dimension model?

1. Identify the business processes from the ER model 2. Identify many-to-many tables int he ER model to convert to fact tables 3. Denormalize remaining tables into flat dimension tables -This is done by combining tables that have similar information. You have to make a surrogate key to be the primary key in the dimension table. The primary key stays in the fact table because it is used for mapping 4. identify date and time from the ER model

Why use a distributed DBMS?

1. Improved performance -a distributed DBMS fragments the database to keep data closer to where it is needed most, which reduces data management time significantly 2. Easier expansion (scalability) -allows you to add new nodes (computers) at any time without changing the configuration

What are the common factors influencing which data architecture to use?

1. Information Interdependence between organizational units -the need to share information among organizational units 2. Upper management's information needs 3. Urgency of need for data warehouse 4. Nature of end-user tasks 5. Constraints on resources 6. Strategic view of the warehouse prior to implementation -the extent to which implementing a data warehouse was viewed as being important to supporting strategic objectives 7. Compatibility with existing systems 8. Perceived ability of the in-house IT staff 9. Technical issues 10. Expert influence

What are the different types of NoSQL?

1. Key-value store -no enforced schema structure -returns whatever the stored value is, but lookups are by the primary key only 2. Document -storage of documents that you can query, ex. MongoDB 3. (Wide) column family -a key points to column families, which are columns that fit together -this model is complex, but retrieving individual pieces of data is easier than in the others 4. Graph -good for databases of networks -node-and-arc graph structure; some support ACID
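A key-value store reduces to something like a dict: schemaless values, lookup by key only. A toy sketch with invented keys:

```python
# Key-value store sketch: a dict keyed by primary key, with no enforced
# schema -- values can be any shape, and lookup is by key only.
store = {}
store["user:1"] = {"name": "Ann", "tags": ["vip"]}
store["user:2"] = "just a string"  # no schema: any value is fine

print(store["user:1"]["name"])  # Ann
```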

What were the stages of Cobol? (How did they evolve)

1. Machine language (0,1 only) 2. Assembly language 3. Procedural languages -any kind of scripting language 4. Object-oriented -Java, C++ 5. Visual -Access, DreamWeaver. any program that allows you to program visually

What are the different kinds of Data Management systems we have used throughout history?

1. Manual Systems 2. File systems 3. Centralized database systems 4. Distributed database systems 5. Client/server databases 6. Data warehouses/Data marts

What are implications of good Data Quality?

1. Minimize risks to projects/initiatives 2. Make timely business decisions 3. Ensure regulatory compliance 4. Expand customer base

What is a Centralized System?

1. Need for current data 2. Data could be stored among computers on a centralized system -1 super computer, several dumb terminals -Limited geographical range

What are the types of controls?

1. Operational control -Day-to-day 2. Managerial control -acquiring and managing resources 3. Strategic planning -Long-term visionary thinking Each type of control has a different application for the type of decision (study these) and all are useful in BI

What are the steps of dimensional modeling?

1. Requirement gathering 2. Requirements definition document -information package diagrams are an essential part of this document. They form the basis for the logical data design for a data warehouse made up of data marts 3. Data design -results in a dimensional data model. It should primarily facilitate queries and analyses

What are the types of decisions?

1. Structured (well-structured) -standard solution, clear and concise, ex. 2+2 2. Semistructured -a combination of structured and unstructured decisions 3. Unstructured (ill-structured) -difficult, no clear way to solve, unclear deliverables

What are some characteristics of a data warehouse?

1. Subject-oriented -decisions are made about subjects -ex. which products do we sell? 2. Integrated -consistent format/meaning across organization -ex. binny | samuel vs Binny Samuel 3. Time-variant (time series) -incorporate time into all data -Think long term trends 4. Nonvolatile -not updated -time-stamped

What are the reasons for iterative development when developing a data warehouse? (Inmon)

1. The industry track record of success strongly suggests it 2. The end user is unable to articulate many requirements until the first iteration is done 3. Management will not make a full commitment until at least a few results are actually tangible 4. visible results must be seen quickly

What are the principles of dimensional modeling?

1. The model should provide the best data access 2. the whole model must be query-centric 3. it must be optimized for queries and analyses 4. the model must show that the dimension tables interact with the fact table 5. it should also be structured in such a way that every dimension can interact equally with the fact table 6. the model should allow drilling down or rolling up along dimension hierarchies *fact table in the middle and dimension tables around it satisfy these requirements*

What are the current data management trends?

1. Specialized needs and applications 2. Relaxing the ACID assumptions -Big Data and NoSQL handle unstructured data 3. No need for the full overhead of the relational model

What are the V's of Big data?

1. Volume -there is a huge and growing amount of data being created 2. Velocity -data is being created at a very fast pace 3. Variety -structured or unstructured 4. Veracity -uncertainty about data quality; lots of false data out there 5. Value -can we create value for decision making?

What are the takeaways from the airfare scenario?

1. You can use data to make decisions 2. The decisions may be complex - with tradeoffs- which is why you may need to automate the decisions 3. The rules you use will evolve over time, be refined 4. You can measure the consequences of decisions. 5. You can tie decisions to business goals

Why use relational models for DW? (Inmon)

1. they are flexible design for DW 2. they have a versatile design for DW -can be combined in many different ways, many different views for DW can be supported 3. relational models are easier to maintain for DW

What are the differences in data quality between the first 3 normal forms?

1NF: the relation has no multi-valued attributes 2NF: no partial-key dependencies; every non-key attribute depends on the entire primary key 3NF: no transitive dependencies (no functional dependencies between non-key attributes)
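A sketch of a 3NF fix, assuming a hypothetical customer table where city depends transitively on zip:

```python
# Removing a transitive dependency (3NF): zip -> city depends on zip, not on
# the customer key, so city moves to its own relation.
customers_unnormalized = [
    {"cust_id": 1, "name": "Ann", "zip": "45202", "city": "Cincinnati"},
    {"cust_id": 2, "name": "Bob", "zip": "45202", "city": "Cincinnati"},
]

# Decompose: customer keeps the zip (a foreign key); city lives once per zip.
customers = [{"cust_id": r["cust_id"], "name": r["name"], "zip": r["zip"]}
             for r in customers_unnormalized]
zips = {r["zip"]: r["city"] for r in customers_unnormalized}

# The city is now stored once, so it cannot become inconsistent between rows.
print(zips)  # {'45202': 'Cincinnati'}
```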

What is a data warehouse?

A data warehouse is an integrated, subject-oriented, time-variant, nonvolatile database that provides support for decision making. -Technically, it is simply a *type* of database -It can be used as a motivation for why we need to store data.

What is an Operational data store?

A type of DB often used as an interim area for a data warehouse that tends to provide fairly recent information. -good for short term mission critical decisions -Oper mart - an operational data mart

What is Three Schema Architecture?

ANSI/SPARC needed to define what a database is 1. External level (individual user views) -Filters the data according to application 2. Conceptual level (community user view) 3. Internal level (storage view)

What is the difference between Additive and semi-additive measures in the fact table? What are non-additive measures?

Additive measures can be added across any dimension. Semi-additive measures can be added in only some dimensions. There are also non-additive measures, such as ratios and percentages, ex. gross margin.
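A quick numeric sketch (invented sales and cost figures) showing why an additive measure sums cleanly while a ratio must be recomputed:

```python
# Additive vs. non-additive: unit sales and cost sum cleanly across stores,
# but a ratio like gross margin must be recomputed, not summed.
facts = [
    {"store": "A", "sales": 100, "cost": 60},
    {"store": "B", "sales": 200, "cost": 150},
]

total_sales = sum(f["sales"] for f in facts)       # additive: 300
total_cost = sum(f["cost"] for f in facts)         # additive: 210
margin = (total_sales - total_cost) / total_sales  # recomputed ratio: 0.3

# Summing the per-store margins instead would be wrong:
wrong = sum((f["sales"] - f["cost"]) / f["sales"] for f in facts)
print(margin, wrong)  # 0.3 vs 0.65
```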

What are the advantages of keeping the fact table at the lowest grain? What are the tradeoffs?

Advantages: 1. If it is at the lowest grain, users can drill down or roll up very efficiently, and there is no need to go to the operational systems because the lowest possible levels are already in the data warehouse 2. It allows for more "graceful" extensions to the fact table 3. Good for data mining Disadvantages: 1. The lowest grain means a large number of fact-table rows -you have to pay for more storage and more maintenance

What is an RDBMS? Why use one?

There is an impedance mismatch between application objects and relational tables, and joins are very expensive in terms of in-and-out (I/O) for a database. Storage is expensive.

What are attribute hierarchies in dimension tables?

Attributes represent properties of dimension and fact tables. Attribute hierarchies allow you to view the data in different forms, denormalizing the data. Ex. year -> quarter -> month
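A sketch of rolling up a month -> quarter -> year hierarchy with plain dict grouping (hypothetical sales figures):

```python
# Rolling up a date hierarchy with dict grouping.
sales = [("2017-01", 10), ("2017-02", 20), ("2017-04", 5)]

def roll_up(rows, level):
    # level maps a month key like '2017-01' to a coarser hierarchy key.
    totals = {}
    for month, amount in rows:
        key = level(month)
        totals[key] = totals.get(key, 0) + amount
    return totals

by_quarter = roll_up(sales, lambda m: m[:4] + "-Q" + str((int(m[5:]) - 1) // 3 + 1))
by_year = roll_up(sales, lambda m: m[:4])
print(by_quarter, by_year)  # {'2017-Q1': 30, '2017-Q2': 5} {'2017': 35}
```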

What is BASE?

Basically Available -uses replication and partitioning to reduce the likelihood of unavailability and complete failure; designed to have nodes fail. Soft State -the system is not in a solid state due to relaxed consistency; there is no write consistency, and writes are applied later. Eventual Consistency -consistency is not enforced at transaction commit as in an RDBMS; availability is valued instead.

Why does each dimension table need to have a direct relationship with the fact table?

Because every dimension table with its attributes must have an equal chance of participating in a query to analyze the attributes in the fact table

Describe dimensions in information package

Because it is a table that structures the data in a way that you can use later. -has categories in different dimensions. Always has a dimension for time. the dimensions are useful for allowing you to sort/analyze your data. Dimensions are hierarchical

Why is it called STAR schema? What does it answer?

Because it is shaped like a star, the dimension tables are the points and the fact table is the middle. It answers the questions of what, when, by whom, and to whom. It can be easily understood by the users

What is a dimension table?

Companions of the fact table that are textual context associated with business process events. It has a single primary key that is usually a surrogate key. Can contain one or more hierarchies

What are the tradeoffs for Data Warehouse Architecture?

Compatibility, Technical expertise, money, time, ease of use

What is Dimensional Modeling used for?

DM is used to model data warehouses. This is because: 1. Data warehouses are meant to answer questions on overall process 2. Data warehouses focus is on how managers view the business 3. Data warehouses review business trends 4. Information is centered around a business process 5. Answers show how the business measures the process 6. the measures are studied in many ways along several business dimensions

What are distributed systems?

Database is not just in one physical location; network connections -Allows you to view data anywhere -The motivation for developing this system was global competition, along with data accessibility and reliability; it could locate where data was gathered and/or shared while also making data sharing easier

How do you determine date and time dimensions when turning an ER model into a dimension model?

Dates are usually stored in the form of a date timestamp column inside the ER model. They are usually stored in the transaction-based tables

What is ER Modeling used for?

ER Modeling is used in OLTP and operational systems. This is because: 1. OLTP systems capture details of events or transactions 2. OLTP focuses on individual events 3. OLTP is a window into micro-level transactions 4. Picture at detail level necessary to run the business 5. Suitable only for questions at transaction level 6. Data consistency, non-redundancy, and efficient data storage is critical

Data Integration via ETL - how does the process Extract?

Extracts from data sources (packaged applications, legacy systems, other internal applications). The techniques it uses are push vs. pull -push is when the source sends data to the ETL -pull is when the ETL retrieves the data itself

What is an additive fact? Semi-additive? Non-additive?

Additive facts can be added up across all the dimensions and all combinations of dimensions. Semi-additive facts can be summed across some, but not all, dimensions. Non-additive facts cannot be summed across any dimension, ex. unit prices.

What are manual systems?

First form of databases 1. Stone tablets are the first known data record. Used for assets and taxes 2. Eventually evolved to writing on paper 3. Next evolved to Punch card system -Great for counting, tabulating -IBM evolved from a punch card machine -Eventually the system could encode data

What is the stability analysis?

Grouping attributes together based on their propensity for change

What is Hadoop?

Hadoop consists of the Hadoop Distributed File System (HDFS) and Hadoop MapReduce (MR). A set of machines running both is known as a Hadoop cluster.
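The MapReduce half can be sketched in plain Python: map emits key-value pairs, a shuffle groups them, and reduce aggregates each group (this mimics the shape of an MR job, not Hadoop's actual API):

```python
# MapReduce-style word count sketch, minus the distribution.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Group the emitted values by key, as Hadoop's shuffle/sort would.
    groups = {}
    for key, value in pairs:
        groups.setdefault(key, []).append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values (here: sum the counts).
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big cluster"])))
print(counts)  # {'big': 2, 'data': 1, 'cluster': 1}
```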

What does a distributed database do that a centralized doesnt?

It does everything a centralized DBMS does plus: 1. Track data distribution 2. Choose global query/transaction execution 3. Transmit queries and data among sites 4. Maintain global data consistency (e.g. update multiple copies of data) 5. Detect/recover from communication and site failures 6. Provide data security

What is a Transaction and why do we need them?

It is a logical unit of work that is either done or not done. All or nothing. We need them for recovery of data when there are failures. Also to roll back data (occurs when there's an unsuccessful end of transaction)

Why is ETL Important?

It is the process of how data is loaded from the source system to the data warehouse

What is data profiling?

It is where you diagnose data problems. You do this by: structure discovery -i.e. patterns for phone numbers or data with null values; content discovery -standardization of formatting (GM vs. General Motors), frequency counts and outliers, business-rule validation; relationship discovery
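A sketch of the discovery steps over invented rows, using only the standard library:

```python
# Data profiling sketch: structure discovery (phone pattern, nulls) and
# content discovery (frequency counts) over hypothetical rows.
import re

rows = [
    {"phone": "513-555-0100", "maker": "GM"},
    {"phone": None, "maker": "General Motors"},
    {"phone": "5550100", "maker": "GM"},
]

phone_pattern = re.compile(r"^\d{3}-\d{3}-\d{4}$")
nulls = sum(1 for r in rows if r["phone"] is None)
bad_format = sum(1 for r in rows if r["phone"] and not phone_pattern.match(r["phone"]))

freq = {}  # frequency counts reveal unstandardized values like GM vs. General Motors
for r in rows:
    freq[r["maker"]] = freq.get(r["maker"], 0) + 1

print(nulls, bad_format, freq)  # 1 1 {'GM': 2, 'General Motors': 1}
```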

What are the differences between Inmon and Kimball's approach on building a data warehouse?

Kimball's approach is the data mart approach. It is more nimble and is bottom-up driven; fairly simple, and its primary audience is the end user. Inmon's approach takes more time and cost, and relies heavily on data modeling. It can handle more users and more data than the DM approach; it is top-down driven and quite complex, and its primary audience is IT professionals.

What is enterprise Data warehouse?

Large scale data warehouse used across the enterprise for decision support

What is data accuracy?

Mapping to the "real world." It has two characteristics: 1. Form -the same value can take different forms, i.e. 31/1/2017 2. Content -the underlying meaning, ex. January 31, 2017 is the same content in either form

What is an information package? (Kimball)

Method for determining and recording information requirements for a data warehouse. Identify the measurements and relevant dimensions that are needed in a DW; one per subject area. *they show the metrics, business dimensions, and the hierarchies within individual business dimensions*

What is NoSQL and why use it?

Not only SQL; handles complexity better

What is Physical Data Independence?

Note definition: You can make changes to how you store data and it will not affect how it is viewed Slide definition: The capability to change the physical storage structure or access methods without having to change conceptual schemas

Different DW Architecture: Centralized Data Warehouse

Similar to hub-and-spoke except there are no dependent data marts -Queries and applications access both the relational data and the dimensional views

What is the fact grain? What are the two kinds?

The level of detail in a fact table -determined by the intersection of all the components of the primary key, including all foreign keys and other primary keys 1. Transactional grain: Finest level of detail (Kimball recommends this) 2. Aggregate grain: more summarized -periodic snapshot, accumulating snapshot

What happens when a query is made against the data warehouse?

The results of the query are produced by combining or joining one or more dimension tables with the fact table >the joins are between the fact table and individual dimension tables >the relationship of a particular row in the fact table is with the rows in each dimension table >these individual relationships are clearly shown as the spikes of the STAR schema
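A tiny star-schema query with Python's sqlite3 module (made-up products, stores, and sales) showing the fact table joined to each dimension:

```python
# Star-schema join sketch: the fact table joins to each dimension through
# its surrogate foreign key.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_store (store_key INTEGER PRIMARY KEY, state TEXT);
CREATE TABLE fact_sales (product_key INTEGER, store_key INTEGER, units INTEGER);
INSERT INTO dim_product VALUES (1, 'Widget'), (2, 'Gadget');
INSERT INTO dim_store VALUES (1, 'OH'), (2, 'ME');
INSERT INTO fact_sales VALUES (1, 1, 10), (1, 2, 5), (2, 1, 7);
""")

# Total units by product name and state: fact joined to both dimensions.
result = con.execute("""
    SELECT p.name, s.state, SUM(f.units)
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    JOIN dim_store s ON f.store_key = s.store_key
    GROUP BY p.name, s.state
    ORDER BY p.name, s.state
""").fetchall()
print(result)  # [('Gadget', 'OH', 7), ('Widget', 'ME', 5), ('Widget', 'OH', 10)]
```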

What is Brewers' CAP theorem?

There will be tradeoffs in how you build your DBMS, especially between availability and consistency. -Any distributed system can support at most two of the following three characteristics: 1. Consistency 2. Availability 3. Partition tolerance

what are the characteristics of a dimensional table?

They have textual attributes that are descriptors of the components within the business dimensions. Users compose their queries based on these attributes, ex. Brand=BigParts, State=Maine >The table is wide; it is not uncommon to have 50+ attributes >The attributes are not directly related to one another, ex. product name and package size in the Product table >It is flattened out, not normalized >Drilling down / rolling up -you can get from higher levels of aggregation to lower levels of detail, ex. zip code, city, and state form a hierarchy; total sales by state, then by city, then by zip code is a drill-down of the hierarchy >Fewer records than the fact table -a product dimension table may have 500 rows while the fact table has over a million

Different DW Architecture: Federated

This architecture leaves existing decision support structures (operational data stores, data warehouses, data marts) in place. Based on business requirements, data is accessed from these sources. -Advocated as a practical method for firms that already have a complex existing decision support environment and do not want to rebuild existing data warehouses, data marts, and legacy systems. Flow: Existing sources -> logical and physical integration of common data elements -> end-user access and applications

How does the high level of modeling and midlevel modeling overlap?

When a relationship is identified at the ER level, two connectors are formed in the DIS. For example, if there is a relationship identified between Account and Customer, then two connectors will form in the midlevel. The connector indicates that a customer can have multiple accounts connected to it. and vice versa, there will be another connector that indicates that an account can have multiple customers using it.

What is logical data independence?

The capability to change (e.g. "add to") the conceptual schema (logical model of data) without having to change external schemas

What is normalization?

The formal process of deciding which attributes should be grouped together in a relation (table). We only care about the first 3 normal forms for DW. It is a way to certify data quality, because the data must meet a certain standard to be in a given normal form.

Why does Kimball want us to store at the finest level?

It gives the most amount of data, and customers will always want more details.

What is high-level data model and low-level data model? (Inmon)

The high-level data model shows how the major subject areas of the data warehouse should be divided. Typical high-level subject areas are customer, product, shipment, order, part number, etc. The low-level data model is where physical database design is done: partitioning is done, foreign-key relationships are defined to the DBMS, indexes are defined, and other physical aspects of design are completed.

What is the midlevel data model? (Inmon)

Identifies keys, attributes, relationships, and other details of the DW; it "fleshes out" the high-level data model. Inmon uses the ERD for the high-level conceptual model. The midlevel also needs a way to measure time, because you need to see data over time. Joins are to be eliminated because they take up large amounts of space.

What are the two basic database access operations for a transaction?

read_item(X): read a database item X into program variable X write_item(X): write program variable X into the database item X
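A toy sketch of the two operations against a dict standing in for the database:

```python
# Sketch of the two access operations; a dict stands in for the database and
# program variables live in a separate namespace dict.
db = {"X": 5}
program_vars = {}

def read_item(name):
    # Read the database item into the program variable of the same name.
    program_vars[name] = db[name]

def write_item(name):
    # Write the program variable back into the database item.
    db[name] = program_vars[name]

read_item("X")
program_vars["X"] += 1
write_item("X")
print(db["X"])  # 6
```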

What is data replication?

refers to the storage of data copies at multiple sites served by a computer network -copies data, keeps it in sync across various sites. -fragment copies can be stored at several sites to serve specific information needs

What does it mean to have "quality data"?

It satisfies the requirements of its intended use with information that is: 1. Accurate 2. Timely -is the data up to date? 3. Complete -no missing information 4. Relevant -all necessary data? 5. Understood -is the meaning of the data clear? 6. Trusted -who maintains control?

What is data integration?

the combination of business and technical processes that combine data from disparate sources into meaningful and valuable information

Why data replication?

The existence of fragment copies can enhance data availability and response time while reducing communication and total query costs.

Why use a surrogate key for dimension tables?

they are foreign key references that will substantially reduce the size of the fact table which will reduce warehouse size. The surrogate keys have no meaning outside of the database

Data Integration via ETL - how does the process transform?

Transforms measured data into the same dimensions using the same units so that the data can later be joined; requires joining data, forming aggregates, etc. -Its job is to format data from the source system for the target system; the data format is determined by who the end user is. Record-level functions: selection, joining. Field-level functions: single field, i.e. Fahrenheit -> Celsius; multi-field, i.e. product code -> [Brand Name]+[Product Name]
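The two field-level examples from the card, sketched as small functions:

```python
# Field-level transform sketch: a single-field conversion (Fahrenheit to
# Celsius) and a multi-field concatenation (brand + product name).

def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

def full_product_name(brand, product):
    # Multi-field: derive one target field from two source fields.
    return brand + " " + product

print(fahrenheit_to_celsius(212))           # 100.0
print(full_product_name("Acme", "Widget"))  # Acme Widget
```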

