Applied Algebra

Graphs and Input-Output Pairs

A graph of a function visually describes the way the output variable depends on the input variable. Comparing the outputs at two separate inputs helps you to understand that relationship. In this lesson, you will learn how changes in the input variable cause changes in the output variable and how you can compare outputs based on different inputs and vice versa.

Superscripts

A number, letter, or symbol set slightly above the normal line of type, above the baseline.

Input

A quantitative variable that is chosen to find a solution, or output, of a function.

Output

A quantitative variable that is produced by a function when an input is chosen.

Intervals of Numbers

A range of numbers that includes all the real numbers between its endpoints.

You encounter numbers every day. A casserole recipe that feeds four might call for the casserole to be baked at 375°F. A bike tire might call for inflating the inner tube to 70 pounds per square inch (psi). But how specific are these numbers? Would your casserole turn out all right if the oven were at 370°F? 390°F? What about 150°F? Would your bike be rideable if the tire pressure were just 65 psi? What about 200 psi? As these everyday examples suggest, you often do not need a precise number but rather a range of acceptable values: an interval. Perhaps appropriate baking temperatures are anywhere between 360°F and 390°F. Perhaps your bike tires could be anywhere between 60 psi and 70 psi, including 60 psi and 70 psi.

Look into the bike tire example a bit more. If your bike tires work fine anywhere between 60 psi and 70 psi, you need a way of writing that mathematically. You can use a statement called an "inequality" to do that. For example, you can write the range of acceptable bike tire pressures as 60 ≤ x ≤ 70, where x represents the bike tire pressure. The inequality says that if x is any value between 60 and 70, including the values 60 and 70, then you are good. The less-than-or-equal-to sign (≤) means that values are okay as long as they are just that: less than or equal to the stated value. If, for some reason, 60 and 70 were not acceptable values for the bike tire pressure but everything in between was still okay, you would instead write the inequality as 60 < x < 70, without the "or-equal-to" part. You would now be saying that 60 and 70 are not acceptable, but everything in between is.

Writing out an inequality is just one way of denoting intervals of numbers in the real world. You can also graph them on a number line; for instance, the interval of all x satisfying 65 ≤ x ≤ 80 is a shaded segment from 65 to 80. As you may guess, sometimes intervals do not include their endpoints. For instance, say that federal income tax is 15% on incomes of at least $9,325 but less than $37,950. In other words, this tax bracket consists of all incomes x such that 9,325 ≤ x < 37,950. The left-hand endpoint is included, but the right-hand endpoint is not. In a graphical depiction, you use an empty point to denote an endpoint that is not part of the interval: 9,325 is included, so that endpoint is filled in, or closed, while 37,950 is excluded, so that endpoint is left empty, or open.

You can express this in interval notation by listing the left and right endpoints separated by a comma, using brackets to indicate that an endpoint is included and parentheses to indicate that an endpoint is excluded. In this example, the interval is [9325, 37950). What about when you need to exclude both endpoints? For example, what about all numbers strictly between 0 and 1? Here you cannot include 0 or 1, but any number in between is fine. Denote this in interval notation by (0, 1). Notice that this is the same notation you use for coordinates, so do not confuse the two; the context of the problem you are looking at should resolve any ambiguity.
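To make the bracket-versus-parenthesis distinction concrete, here is a minimal Python sketch (not part of the original lesson) that checks whether a value lies in an interval; the flags stand in for closed [ ] versus open ( ) endpoints, and the function name and sample numbers are illustrative only.

```python
def in_interval(x, lo, hi, include_lo=True, include_hi=True):
    """Check whether x lies in the interval from lo to hi.

    include_lo / include_hi control whether each endpoint counts,
    mirroring brackets [ ] versus parentheses ( ) in interval notation.
    """
    left_ok = x >= lo if include_lo else x > lo
    right_ok = x <= hi if include_hi else x < hi
    return left_ok and right_ok

# The bike-tire interval [60, 70]: both endpoints included.
print(in_interval(60, 60, 70))                             # True
# The tax bracket [9325, 37950): right endpoint excluded.
print(in_interval(37950, 9325, 37950, include_hi=False))   # False
```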

Constant

A value (number) that does not change.

Inequality

An expression that compares two numerical values that are not equal. Strict inequalities use the "less than" (<) or "greater than" (>) signs to exclude the endpoints of an interval; the "or-equal-to" signs (≤, ≥) include them.

Tables and Graphs

Data are part of everyday modern life. Unfortunately, not all data are presented in the way you might need them. Maybe you have experienced this already, such as coming across a graph when you need information about a specific point, or looking at a table and wishing the author had graphed the information so you could understand the overall trends more easily.

Independent vs. Dependent Variables

In general, think of the independent variable as the input and the dependent variable as the output. Stated another way, the independent variable's input influences what output, or outcome, you get from the dependent variable.

Table

In math, a logical arrangement of quantitative data in columns and rows. In any table, the left column is generally the independent variable.

Drawing Conclusions from Functions

In some situations, you can use a function to draw a conclusion about a situation. In this next scenario, see how this can be done in a business setting to predict things like possible overtime for a particular week.

Michelle is a manager at a business that processes health insurance claims. As part of her responsibilities, she has to project the number of hours the claims processors might work each week based on the number of claims to be processed. Michelle uses the function H(c) = 0.02c to make these predictions, where H stands for the projected number of hours her processors will work to process c claims. In a normal week, the claims processors have about 2,000 health insurance claims to process. To get an estimate of the projected hours for the week, Michelle uses the function above and sees that H(2000) = 0.02(2000) = 40. This means that to process 2,000 health insurance claims, Michelle should project that each of her claims processors needs to work about 40 hours for the week.

What if, for a particular week, Michelle's processors only have about 1,800 health insurance claims to process? It would be helpful for her to know the projected number of hours, per claims processor, for this lighter week. Using the same function, she can calculate that H(1800) = 0.02(1800) = 36. In this situation, Michelle knows that each of the processors will only have about 36 hours of work. This can be helpful in making projections for overtime as well.

If Michelle has 2,500 health insurance claims to process for next week, what conclusion can she make about any overtime (working over 40 hours a week) her processors may be called on to do? The projected number of hours for Michelle's processors next week would be H(2500) = 0.02(2500) = 50. This means that Michelle can predict about 10 hours of overtime for each processor for next week.
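As a quick check of Michelle's arithmetic, here is a small Python sketch of the hours function H(c) = 0.02c from the passage; the overtime helper and the 40-hour threshold are assumptions added for illustration.

```python
def projected_hours(claims):
    """H(c) = 0.02c: projected weekly hours per processor for c claims."""
    return 0.02 * claims

def overtime_hours(claims, standard_week=40):
    """Hypothetical helper: hours beyond a standard 40-hour week (0 if none)."""
    return max(0, projected_hours(claims) - standard_week)

print(projected_hours(2000))  # 40.0 -> a normal 40-hour week
print(projected_hours(1800))  # 36.0 -> a lighter week
print(overtime_hours(2500))   # 10.0 -> about 10 hours of overtime each
```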

Trends in Data

In modern society, you may well be called on to read and understand data. Data can be presented in many ways, and one challenge is to communicate data clearly and accessibly. This could mean representing data visually by putting it into a graph or table. When distributing information on trends to members of the general public, graphs are generally the preferred method. They allow for quick examination of large amounts of data at a glance and can be understood by most individuals. Tables, however, are a preferred method of presentation when you are performing data analysis, since tables allow you to both see and interact with individual data points.

Interpreting Graphs and Functions Together

Tracy has been asked to present a new business concept to her boss, who is a visual person. She has done research, compiled data, and made input-output tables; however, Tracy's boss wants a visual representation in the form of a graph. How is Tracy going to take all her tables and turn them into graphs to make a presentation more visually appealing—and convincing—for her boss? In this lesson, you will learn to interpret x-y coordinates in context, including locations on a graph and function inputs and outputs; represent the coordinates of a point as an ordered pair; write function notation; and write notation for multivariate functions, including functions of qualitative variables.

Multivariate

Involving a number of different (though not necessarily independent) variables.

Rate of Change

May be average or instantaneous; describes how one quantity changes in relation to another quantity.

Average Rates of Change in Graph

Rick just received his annual savings account statement. He opened the account with only $250, and after six months he had saved $800. After six more months—a full year—he had saved $1,300. To find out how much he saved per month, Rick needs to calculate the average rate of change (or slope) per month. With this information, Rick can make predictions about his future savings. Rick knows how to do that, and before long, you will know how, too. In this lesson, you will learn the formula for rate of change, how to use two specified values to calculate an average rate of change, and a key point about the rate of change for a linear function.

1. What is one way to remember what rate of change is? 2. On another day, Grace had jogged 2 miles after 1 hour and 2.5 miles after 1.5 hours. What was Grace's average rate of change (speed) during this half hour? Answer with units.

1. Rise over run: rate of change is the change in y-values over the change in x-values. 2. 1 mile per hour, because (2.5 − 2)/(1.5 − 1) = 0.5/0.5 = 1.
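The rise-over-run formula translates directly into code. This hypothetical Python helper (not from the lesson) computes an average rate of change from two points and reproduces Grace's 1 mile per hour and Rick's savings rate of $87.50 per month.

```python
def average_rate_of_change(x1, y1, x2, y2):
    """Rise over run: change in y divided by change in x."""
    return (y2 - y1) / (x2 - x1)

# Grace: (1 hr, 2 mi) to (1.5 hr, 2.5 mi) -> 1.0 mile per hour
print(average_rate_of_change(1, 2, 1.5, 2.5))

# Rick: (0 months, $250) to (12 months, $1,300) -> 87.5 dollars per month
print(average_rate_of_change(0, 250, 12, 1300))
```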

Output Values

So far, you have been comparing two output values for given input values. Sometimes you need to change this around, though, given the context of the problem you are working on. For example, a study of customer interaction with your business through social media shows that by promoting your posts, you reach more people. The study suggests that the total reach of your post, R, is a function of approximately how much you spend, s (measured in dollars), modeled by the function R(s)=100s+800, which is depicted in the following interactive graph. Suppose you want to reach 2,000 people. The question might be: How much do you need to spend to reach these 2,000 people? That might be a very helpful question for a marketing department. Keep in mind that you want to know how much you spend, s, when R = 2,000. Using the graph above, you can see the coordinate (12, 2000) is on the graph, meaning that if you spend $12, you should reach your target 2,000 people.
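Because R(s) = 100s + 800 is linear, you can also solve for the required spend algebraically: 2,000 = 100s + 800 gives s = (2,000 − 800)/100 = 12. Here is a small Python sketch of that inversion (the function names are illustrative, not from the lesson):

```python
def reach(spend):
    """R(s) = 100s + 800: people reached for s dollars of promotion."""
    return 100 * spend + 800

def spend_needed(target):
    """Solve target = 100s + 800 for s."""
    return (target - 800) / 100

s = spend_needed(2000)
print(s, reach(s))  # 12.0 2000.0 -> spend $12 to reach 2,000 people
```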

Plotting Variables on Graphs

Suppose you are writing code to search a large database and you want the search results returned quickly. You decide to use more computers to speed up the search (a process known as parallel computing). You run some tests with 1, 2, 3, and 4 computers doing the search, and record the amount of time (in minutes) that it takes to run the search. In this example, the number of processors N is an independent variable, and the time t (in minutes) to complete the query is a dependent variable. It is typical to graph the independent variable on the horizontal axis and the dependent variable on the vertical axis. You are asserting that changes in N cause changes in t, or that the variable t depends on N.

Another example: Suppose you are producing and selling knit hats. You set up a website, complete with photos of the finished hats. The question is how much to charge for each hat. From previous experience selling online, you know that if you raise the price you will sell fewer hats each month, and if you lower the price, you will sell more. That is, the demand for your product tends to decrease as the unit price increases. In this situation, the unit price p is an independent variable and the quantity q sold per month is a dependent variable. In other words, you are asserting that changing the unit price causes the demand to change. Again, it is typical (but not necessary) to graph the independent variable on the horizontal axis and the dependent variable on the vertical axis, as shown in the learning check's graph.

Another graph describes the Zillow Home Value Index z (in thousands of dollars) as a function of time t (in years). You can write this in function notation using z(t). This means you can write z(2016) = 367 to communicate that the Zillow Home Value Index was about $367,000 for Portland at the start of 2016. It also means that the coordinate (2016, 367) is on the graph. Notice that you may not have an algebraic formula for a function that is defined by a graph. There will be many times in real life when this happens. In these cases, you still use the same function notation. Also notice that the years are on the horizontal axis while the Zillow Home Value Index is on the vertical axis. This implies that the year t is probably the independent variable and the home value index z is probably the dependent variable.

Sometimes you can define the relationship between two variables by a graph or by a function. One such example converts temperature in degrees Celsius (C) to degrees Fahrenheit (F). The function is F(C) = 1.8C + 32. Suppose you wanted to figure out 25°C in degrees Fahrenheit. On the graph, you would find the coordinate (25, 77), which tells you that 25°C is equivalent to 77°F. You can also see this by using the formula: F(25) = 1.8(25) + 32 = 45 + 32 = 77. Either way, 25°C is equivalent to 77°F. To bring this back to independent and dependent variables, notice that in this graph the independent variable is degrees Celsius while the dependent variable is degrees Fahrenheit. You know this by the placement of each on the horizontal and vertical axes.
It is no coincidence that the graph looks this way based on the original function, F(C) = 1.8C + 32. Even with the function, you input a value for C, like C = 25, and see what the corresponding F value is.
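A one-line Python version of the conversion makes the input-output reading explicit; this is just a sketch of F(C) = 1.8C + 32 from the passage.

```python
def fahrenheit(celsius):
    """F(C) = 1.8C + 32."""
    return 1.8 * celsius + 32

print(fahrenheit(25))  # 77.0, matching the coordinate (25, 77) on the graph
```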

Matching Coordinates to Graphs

Suppose you run a coffee shop and you keep track of how many medium cups of coffee you sell each week, as well as the total revenue that you bring in from these sales. You could record this information as a coordinate pair (n, R), where n is the number of cups sold and R is the revenue (in dollars). After several weeks, you could graph these points and observe that the relationship between these two variables is linear, passing through the origin, and with slope equal to the price of a cup, $2.75 for a medium at your shop. Knowing the price per cup of coffee, you calculate that you make a revenue of $55 for selling 20 medium cups of coffee. Examine the following graph depicting this.

Often it is important to measure two quantities and describe a relationship between them. Suppose you learn from your utility company that water costs $5.88 per 1,000 gallons. If you let x describe your water use (in thousands of gallons) and let y be your water bill (in dollars), then the values (x, y) come in pairs such as (1, 5.88), (2, 11.76), and (3.4, 20). In each case, y is 5.88 times as large as x, which you can write as the equation y = 5.88x. What happens if you graph all the points with coordinates (x, y) satisfying the equation y = 5.88x? You get a line, passing through the origin. The following linear graph informs you of the relationship between water usage and cost.

Another familiar graph is the growth of money in a savings account as interest compounds. If a credit union offers you 2.5% annual interest, and you initially deposit $400, you can track the balance from year to year. Let the horizontal coordinate measure time since the principal deposit, and let the vertical coordinate be the balance at that time. The points (0, 400), (1, 410), and (3, 430.76) lie on the graph, which bends upwards as it goes to the right. Examine the following graph depicting this.

Take the example (0, 400). This means that at time t = 0 (or when you opened the account), the balance was $400. In terms of the coordinates on the graph, (0, 400) means to go 0 units in the horizontal direction—nowhere left or right—and then up 400 units vertically. What about (1, 410)? This means that at t = 1 (or 1 year after opening the account), the balance was $410. This is because your $400 has gained 2.5% interest (or $10), so you would have $410 at the end of the first year. In terms of the coordinates on the graph, (1, 410) means to go 1 unit right horizontally, since 1 is positive, and then up 410 units vertically. You can interpret (3, 430.76) in a similar fashion. At the end of the third year, you would have $430.76 in the account. You will also see the coordinate (10, 512.03), meaning that at the end of the tenth year, you would have $512.03 in the account.
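The savings-account points can be reproduced from the standard compound-interest formula, balance = principal × (1 + rate)^years. A short Python check (the function name and rounding are illustrative):

```python
def balance(years, principal=400, rate=0.025):
    """Account balance after compounding 2.5% annual interest."""
    return principal * (1 + rate) ** years

for t in (0, 1, 3, 10):
    print(t, round(balance(t), 2))  # (0, 400.0), (1, 410.0), (3, 430.76), (10, 512.03)
```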

Order of Operations

The conventional rules for evaluating math expressions: 1) Parentheses and brackets, from the inside out; 2) Exponents; 3) Multiplication and division, from left to right; 4) Addition and subtraction, from left to right.

Run

The distance a line goes right (or left) on a graph, along the x-axis.

Rise

The distance a line goes up (or down) on a graph, along the y-axis.

Moore's Law

The observation by Gordon Moore in 1965 that the number of transistors on a silicon chip doubles at regular intervals (roughly every two years), which means that computing power also doubles; this is not a "law" in the conventional sense, but rather an empirical observation.

On the Spot Review: Interval Notation

The trend in this data table can be communicated in interval notation as [2060, 2085]. This simply states, in abbreviated terminology, that the years 2060 to 2085 are included here. If these values were placed in parentheses rather than brackets—that is, (2060, 2085)—it would communicate an intent to examine the years between 2060 and 2085, excluding the "bookending" years. This means that all the years from 2061 through 2084 would be part of the data. One last thing to keep in mind—as a standard of practice, you do not usually include the bookend years when describing increasing and decreasing trends over time.

Variable

What is a variable? Simply put, it is something that can vary. Variables are letters that represent values that can change, and they are often used in mathematical expressions and in many careers.

Interpreting Function Inputs and Outputs

When examining data, it is important to be able to switch from one format to another. Whether you start with a set of (x, y) coordinates, a function in input-output notation, a graph, or even a written paragraph, there are times when you have to rewrite the data in a different form of expression. This is an important skill in most industries where data is used, since data representations serve different needs depending on the situation. In this lesson, you will first convert from one data format to another and practice your newly acquired skills with real-world examples. Then, you will learn to interpret inputs and outputs in the context of real-world examples. You will also distinguish between qualitative and quantitative variables and describe functions involving both, even when there are three or more variables in play.

Interval Notation

When you run a business, you have to keep track of income and expenses so that you can look at the difference of the two; namely, profit. But this quantity is sometimes negative (when costs outpace revenue, especially early in a company's life). Often profit is expressed as a function of the quantity of goods sold or services rendered, in which case you would be interested in what quantities yield a positive profit. Alternatively, you might look at profit over time and be interested in knowing during which time intervals it was positive.

On a hot summer afternoon, Zini decides to open a lemonade stand. She sells a tall glass for $0.50. If x represents the number of glasses of lemonade that she sells, and R represents the total revenue that she gets from the sales (in dollars), then R(x) = 0.5x describes the relationship between these two quantities. Here x is the independent variable (glasses of lemonade sold) and R is the dependent variable (the revenue in dollars). In the language of functions, x is the input and R(x) is the associated output.

Sometimes, you do not want to deal with the function and would rather work with the graph associated with the function. To get a graph of a function, you look at the corresponding coordinate pairs (x, R). For example, Zini would make $1.50 if she sold 3 glasses of lemonade—this is R(3) = 1.50, which is just saying that 3 glasses of lemonade will result in a revenue of $1.50. As a coordinate pair, R(3) = 1.50 is represented by (3, 1.50). This is because you typically put the independent variable first and the dependent variable second. Another way to say this is that you typically put the x-value first and the y-value second. If you calculated more input-output pairs by plugging numbers into the function, you could plot a graph of the whole relationship. Notice that R(0) = 0 because Zini will make no revenue if she sells no lemonade.

Notice that the outputs (the y-values, representing revenue) are all positive. This means that Zini makes revenue on each sale, which makes sense. However, when does she actually make a profit? After some analysis, Zini discovers she makes about $0.30 for each cup of lemonade she sells (so each cup costs her about $0.20 to produce). She also knows that she had to buy lemons and sugar to get started, which cost about $5. This means that Zini's profit function is P(x) = 0.30x − 5. Zini has not made any profit to start out, so her profit begins at a negative value. But as Zini starts selling lemonade, the profit becomes less negative and she starts heading toward making a real (positive) profit. (Hint: Check out the profit around the sixteenth and seventeenth glass of lemonade sold.) All this means that you can describe Zini's profit as negative on the interval [0, 16] and positive on the interval [17, ∞).
It is probably unrealistic to use ∞ in the interval here, since Zini will not be able to sell an infinite amount of lemonade in her neighborhood, but mathematically what you are trying to communicate is that it is all positive profits for Zini after she sells that seventeenth cup of lemonade (or until she runs out of lemonade to sell).

It is also important to be able to communicate observations from a graph, such as intervals of growth and decline. Consider the following graph of U.S. Census data for the population of Cleveland, Ohio, over the past century and a half. Notice that the city grew consistently up through 1930, then experienced a slight dip, then grew to almost a million in 1950, before experiencing a rather steady decline. The population growth was positive on the intervals [1840, 1930] and [1940, 1950], and the population growth was negative (that is, the population declined) on the intervals [1930, 1940] and [1950, 2010]. In this example, you are not worried about whether the endpoints of the intervals are included or excluded. It is hard to know if the population was growing or shrinking in these exact years because census data is only collected every ten years.
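You can confirm where Zini's profit turns positive with a few lines of Python; this sketch simply evaluates P(x) = 0.30x − 5 at whole-cup counts until the result is positive (the loop structure is illustrative, not from the lesson).

```python
def profit(cups):
    """P(x) = 0.30x - 5: Zini's profit after selling x cups of lemonade."""
    return 0.30 * cups - 5

# Find the first whole-cup count with positive profit.
cups = 0
while profit(cups) <= 0:
    cups += 1
print(cups, round(profit(cups), 2))  # 17 0.1 -> profit first turns positive at cup 17
```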

Data Today

Whether you realize it or not, data—statistical values used to analyze, describe, and understand patterns or trends—are everywhere in society. Every day, opinion polls are presented on social media, public policy data are distributed on the news, and statistics are offered to bolster advertisements. The challenge is not in finding data, but rather in learning how best to evaluate and understand it.

Begin with the following table: In any table, the left column is generally the independent variable. The independent variable should account for any variation observed in the dependent variable(s), usually put in one or more columns to the right. (One independent variable can sometimes predict the response in several dependent variables.) However, you cannot always rely on the left column being the independent variable. Think about how you would determine which variable is the independent and which is the dependent without the help of a pre-formatted chart.

In the case above, you have two variables: Year and Unemployment Rate. In one situation, you might be interested in seeing how the unemployment rate varies over the years. In this case, years are not affected by the action of another variable, so Year is independent. The variation in Unemployment Rate occurs in response to events happening over time, so in this case, Unemployment Rate is the dependent variable. In another situation, though, you might be interested in keeping track only of the years when unemployment was over 5% (since these years represent harder economic times). In this context, it is the unemployment rate that explains or predicts the values you are interested in: the years. This time, the Unemployment Rate is the independent variable and the dependent variable is Year. You just need to be sure you understand the context of the problem before you can definitively say which is the independent variable and which is the dependent variable.

Another example of data organized by independent and dependent variables is the increase in computer speeds over the years. Moore's Law holds that the speed of an average computer approximately doubles every three years. Examine the following table to see the processing speeds of chips from 1978 to 1995. With Moore's Law, Year is the independent variable, and Processing Speed is the dependent variable. Notice that Processing Speed increases as a function of time passed. You can also see that the processing speed roughly doubled every three years or so. (More often than not, you will find that time is the independent variable when presented in a data set.)

As you can see, it is vital to recognize the conclusions that can be made simply by examining a table. Now review the following data from U.S. News and World Report regarding student debt: This table presents a lot of information. Notice the trends and think about what they mean—take each column separately and see what you notice in the data trends.

Commutative

Describing an operation in which a group of quantities connected by the operator gives the same result whatever the order of the quantities involved, e.g., a + b = b + a and a × b = b × a.

1. Ron's stock portfolio was worth $100,000 at the beginning of the 1st month and $108,220 at the beginning of the 6th month. What is the average rate of change during those months? 2. In the middle of the 7th month, Ron's portfolio was worth $120,400 and the function's instantaneous rate of change was −$6,800 per month. How would you interpret this instantaneous rate of change? 3. In the call-center scenario, if the function passes through (22, 135.6) and (30, 122), calculate the average rate of change from the 22nd to the 30th day since the product's release.

1. $1,644 per month, because (108,220 − 100,000)/(6 − 1) = 8,220/5 = 1,644. 2. At that moment in the middle of the 7th month, Ron's portfolio was projected to lose value at a rate of about $6,800 per month. Ron's actual loss might be more or less than this amount, since the instantaneous rate of change only tells us how Ron's portfolio was changing at that moment. 3. −1.7 calls per day, because (122 − 135.6)/(30 − 22) = −1.7.

To summarize asymptotes so far:

- Asymptotes occur when the y-values of a function tend toward a specific value as the x-values get large or small.
- Linear functions never have asymptotes.
- Polynomial functions never have asymptotes.
- Exponential functions always have one asymptote, which occurs toward either the positive x-values or the negative x-values.
- Logistic functions always have two asymptotes: one occurs toward the positive x-values and the other toward the negative x-values. (You will learn about logistic functions in the next unit of the course.)

There are other functions besides exponential and logistic functions that also have asymptotes. In general, the principles you are learning in this lesson about asymptotes will apply to any asymptote you may encounter. That said, where else might you run into asymptotes in real-life scenarios? Traveling is a good example; it has natural asymptotes. Say you are traveling 150 miles on a high-speed train. Naturally, the faster the train goes, the shorter the amount of time it takes you to get where you are going; conversely, the slower the train goes, the more time it takes you to arrive. Here is a graph that shows the travel time (in hours) versus the speed of the train (in miles per hour). Do you notice any asymptotes here?

In the case of the total travel time of a high-speed train, the asymptote is created by the average speed of the train. As the train travels faster and faster (that is, as the x-values get larger), the time the train travels (represented by the y-values) tends toward zero. Even if the engineer pushes the train to go faster and faster, the travel time will never be zero. This is the "natural limitation" in this scenario. Graphically, you can see that the function appears to be flattening out, creating an almost horizontal line. This horizontal line is a hallmark sign of an asymptote.

Here is another example: Computer peripherals (keyboards, mice, etc.) have been getting cheaper the last few years, compared to prices in the past. In general, computer equipment depreciates (loses value) over time, and its value has decreased by about 10% per year since 1999. The following graph shows how $700 in computer equipment would have depreciated since 1999. Notice that the value of the computer equipment (the y-values) tends toward zero as time goes on (that is, as the x-values get larger). This also makes sense. While the equipment might not ever be worthless, the value of the computer equipment certainly does go down over time.

Which situation describes an asymptote? Your height as time goes by. Even as more time goes by, your height will have reached (or will reach) a natural maximum. In this situation, your own body puts a natural limitation on the response variable (height). Examine each graph. Which graph does not seem to have an asymptote? The correct graph does not seem to have an asymptote in either the positive or negative x-direction.
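The train example is easy to test numerically: travel time is distance divided by speed, so larger speeds push the time toward, but never to, zero. A minimal Python sketch (the 150-mile distance comes from the passage; the sample speeds are arbitrary):

```python
def travel_time(speed_mph, distance_miles=150):
    """Hours needed to cover the distance at a constant speed."""
    return distance_miles / speed_mph

for s in (50, 100, 300, 1000):
    print(s, travel_time(s))  # 3.0, 1.5, 0.5, 0.15 -> approaching, never reaching, 0
```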

1. Estimate the coefficient of determination in the first regression that follows. 2. Estimate the coefficient of determination in the second regression.

1. 0.04. It is hard to observe a pattern in this data, so it is fair to conclude the explanatory and response variables are not well correlated. 2. 0.85. With only a few outliers, the independent and dependent variables are still highly correlated.

1. According to the following graph, how would point A affect the regression line's slope and its y-intercept? 2. According to the following graph, how would point B affect the regression line's slope and its y-intercept?

1. Point A is located higher than the other points, so it raises the line's y-intercept. It pulls the left part of the line higher, making the line's slope steeper. Since the line's slope is negative, the slope actually becomes smaller (going, say, from −1 to −2). 2. Point B makes the regression line's slope smaller and its y-intercept higher. Point B pulls the right part of the line lower, making the line's slope steeper; since the line's slope is negative, the slope actually becomes smaller (going, say, from −1 to −2). As the right part of the line becomes lower, its left part becomes higher, causing a larger y-intercept.

1. In which year did Retreat Spa's market share reach 30%? 2. In which year did Sunrise Sky Spa's market share reach 30%?

1. 2006 2. 2008

1. Examine the following scatterplot depicting the percentage of people affected by the virus. 2. Which statement is correct about a function of best fit? 3. The exponential model h(x) = 0.9856e^(0.1887x) predicts the percentage of people infected by the virus over time, x, measured in days. 4. Which of these statements is true about coefficients of determination? 5. Examine the following function to solve the next problem. What is a reasonable estimation of the coefficient of determination of the function in this graph?

1. About 30%. The data follows a logistic function's model and will approach a certain maximum. 2. A function of best fit helps you know the behavior of data in the near future. 3. The prediction would be 12,339%, which is impossible. Since exponential models keep increasing forever, they are not a good function to model this scenario; a logistic model would be much better. 4. If a function of best fit has a coefficient of determination of 0.95, it implies a strong fit of the data. If the coefficient of determination is above 0.7, it is considered a strong fit. 5. 0.5. The data set and the function are not a strong fit.

1. In this section's function on human population, the instantaneous rate of change at (900, 1.34) is 0.02. How do you interpret the point's coordinates? 2. How do you interpret the instantaneous rate of change?

1. At the beginning of 1900, the human population was 1.34 billion. 2. At the beginning of 1900, the human population was increasing at the rate of 0.02 billion, or 20 million, per year.

1. Greg is researching the number of U.S. households with cable TV from 1980-2000. The number could be estimated using this formula (in millions of households): f(t) = 120/(1 + 3.5e^(−0.25t)), where t is the number of years since 1980. How many homes had cable in 1980? 2. Greg is researching the number of U.S. households with cable TV from 1980-2000. The number could be estimated using this formula (in millions of households): f(t) = 120/(1 + 3.5e^(−0.25t)), where t is the number of years since 1980. How many homes had cable in 1990? Round your answer to a whole number. 3. Vincent is examining the sales of a video game at his shop in the days following its release. The sales follow a pattern represented by this function: S(t) = 535/(1 + 800e^(−0.3t)), where S(t) is in units of games and t is the number of days since the game's release. What is the maximum number of games expected to be sold at his shop? 4. Vincent is examining the sales of a video game at his shop in the days following its release. The sales follow a pattern represented by this function: S(t) = 535/(1 + 800e^(−0.3t)), where S(t) is in units of games and t is the number of days since the game's release. How many units of the game are expected to be sold by the 15th day?

1. f(0) = 120/(1 + 3.5e^0) = 120/(1 + 3.5) = 120/4.5 ≈ 26.67, so about 26.67 million homes had cable in 1980. 2. f(10) = 120/(1 + 3.5e^(−2.5)) ≈ 93, so in 1990 about 93 million homes had cable. 3. 535 games, because the numerator of S(t) = 535/(1 + 800e^(−0.3t)) is the function's maximum value. 4. 54 games, because S(15) = 535/(1 + 800e^(−4.5)) ≈ 54.
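Evaluating logistic models like these is straightforward in Python with math.exp. This sketch reproduces the cable-TV answers above; the function name is illustrative.

```python
import math

def cable_households(t):
    """f(t) = 120 / (1 + 3.5e^(-0.25t)): millions of homes with cable, t years after 1980."""
    return 120 / (1 + 3.5 * math.exp(-0.25 * t))

print(round(cable_households(0), 2))  # 26.67 -> about 26.67 million homes in 1980
print(round(cable_households(10)))    # 93   -> about 93 million homes in 1990
```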

1. Consider Scenario 3 for pen sales, depicted in the following graph. How do the number of pen sales and the rate of change for Week 20 and Week 400 compare? 2. A sales analyst at Period Pens must recommend whether the new gel pens will be a successful long-term venture for the company. Can they use the data and rate of change from Week 20 to predict what will happen? Why or why not?

1. In Week 20, there were about 31,420 pens sold and sales were increasing by 1,000 pens per week. In Week 400, there were about 6,100 pens sold and the rate of change was 0. While sales were increasing in the short term, the growth ceased in the long term. (The numbers of pens and the rates needed to be multiplied by 1,000 to get the correct values from the graph.) 2. The analyst cannot use the data to predict what will happen. The rate of change is for that specific instant in time, and there is no guarantee that the sales trend will continue at that or any increasing rate.

1. Pinnacle and Regis are comparing their results for the 2012 calendar year to see how they fared at reducing the number of dial-up customers. Recall that the functions for the two companies are P(t) = 12/(1 + 0.23e^(0.3t)) and R(t) = 11.5/(1 + 0.23e^(0.4t)). Which company had a steeper average rate of change for the 2012 calendar year? Try calculating the average rate of change by hand by substituting t = 12 and t = 13 to find points for each function. 2. Using the graph, visually estimate which company did a better job at reducing the number of customers with dial-up.

1. Pinnacle had a steeper average rate of change for the 2012 calendar year. Its average rate of change was -0.3, while Regis's was -0.13. The coordinates for Regis are (12, 0.4) and (13, 0.27), which give a slope of -0.13. The coordinates for Pinnacle are (12, 1.27) and (13, 0.97), which give a slope of -0.3. 2. From t = 0 to t = 6, the average rate of change for Regis is steeper, meaning that Regis was decreasing its number of customers with dial-up faster. Confirm in the following that the average rate of change for Regis was indeed steeper than Pinnacle's.
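You can reproduce the two slopes with a short Python sketch of the average-rate-of-change calculation for P(t) and R(t) over t = 12 to t = 13 (helper names are illustrative):

```python
import math

def P(t):
    """Pinnacle: P(t) = 12 / (1 + 0.23e^(0.3t))."""
    return 12 / (1 + 0.23 * math.exp(0.3 * t))

def R(t):
    """Regis: R(t) = 11.5 / (1 + 0.23e^(0.4t))."""
    return 11.5 / (1 + 0.23 * math.exp(0.4 * t))

def avg_rate(f, a, b):
    """Average rate of change of f between t = a and t = b."""
    return (f(b) - f(a)) / (b - a)

print(round(avg_rate(P, 12, 13), 2))  # -0.3  -> Pinnacle's steeper decline
print(round(avg_rate(R, 12, 13), 2))  # -0.13 -> Regis
```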

Find the rate of memory use at 1:00 p.m. on the day in question. Find the rate of memory use at 8:30 p.m. on the day in question.

1. Since R(13)=46, the rate of memory use is approximately 46% at 1:00 p.m. When t = 13, the function's value was approximately 46, which means 46% of the server's memory was being used around 1:00 p.m. 2. Since R(20.5)=70, the rate of memory use is approximately 70% at 8:30 p.m. When t = 20.5, the function's value is approximately 70, which means 70% of the server's memory was being used around 8:30 p.m.

1. Find when the number of users reached 1,000. 2. Find when the number of users reached 2,000 in the following applet.

1. The function passed through (6.4, 1), implying the number of users reached 1,000 at 6:24 a.m. (6.4 hours after midnight is 6:24 a.m., since 0.4 × 60 = 24 minutes). 2. 8:24 a.m. The answer 8.4 hours since 0:00 a.m. is also acceptable.

1. When Upscale Nest has a seasonal period of reduced revenues, management needs to make sure there is enough cash in reserve to keep the company going. According to the following graph, when might be the optimal time to put funds in reserve for Upscale Nest, and what is the concavity of the curve during this optimal time? 2. First, look at the segment from A(0, 0) to B(2.5, 5). In this segment, the function is increasing, implying that the application was using more CPU resources. The function is concave up, implying also that the use of CPU resources was increasing faster and faster. Johan does not like this concave-up situation because if the CPU usage increases faster and faster, that could mean that the server will crash. What about the other concave-up situation, where the usage is decreasing slower and slower? This would be a better situation for Johan since his CPU's usage would at least be decreasing. What would Johan think about a situation involving concave down? That would mean one of two things: Either the function decreases faster and faster or it increases slower and slower. Either of these would be good from Johan's point of view. If CPU usage decreased faster and faster, resources are freed up. On the other hand, if CPU usage increased slower and slower, that could also mean that the program is about to free up some resources. Of these two situations, the decreasing faster and faster option is definitely better, but they both are positive situations for Johan. Therefore, overall, he would prefer concave down in this situation. Would Johan rather see a concave-up or concave-down segment from B (2.5, 5) to C (5, 10) in the following graph? 3. Healthcare prices have been on the rise in the United States for decades. Given that it seems that healthcare prices will simply continue to increase, would it be better for patients if healthcare prices were concave up or concave down? Why?

1. Upscale Nest would likely put cash in reserve during peak revenue times, which occur during concave-down portions of the graph. During peak times, there is more money at Upscale Nest, which means this is the easiest time to put money aside; this time also occurs during a concave-down portion of the graph, like the peak around t = 11. 2. Concave down. This would mean resources might be freed up soon (increasing slower and slower) as opposed to more and more resources being used (increasing faster and faster). 3. Concave down would be better; even though healthcare prices are increasing, they could do so at a slower and slower pace, and increasing slower and slower is a concave-down situation.

1. Find Retreat Spa's market share at the beginning of 2010. 2. Estimate Sunrise Sky Spa's market share at the beginning of 2010.

1. Using R(10) = 49, Retreat Spa had a 49% market share at the beginning of 2010; R(t) passes through (10, 49). 2. About 35%.

1. Compare the short-term and long-term instantaneous rates of change for weeks 2 and 28 for Zonkers in the following graph of Scenario 4. Which instantaneous rate of change is better for the Clever Apps company? Why? 2. In another Zonkers' scenario, there were approximately 120 downloads in Week 1 and they were increasing at a rate of 18 per week. What can you predict about the long-term rate at Week 10?

1. Week 2's instantaneous rate of change is better, because it is positive. 2. You cannot predict anything for sure about the long-term rate at Week 10. You would need to know what happened over the following 9 weeks as well as the instantaneous rate of change at Week 10 to see what happened.

1. Which logistic function in the graph has a higher k value in f(x) = L/(1 + C·e^(−kx))? 2. Which logistic function in the graph has a higher L value in f(x) = L/(1 + C·e^(−kx))?

1. f(x) has a higher k value. 2. g(x) has a higher L value.

1. For f(x) = 2,000/(1 + 20e^(−0.1x)) and g(x) = 2,000/(1 + 30e^(−0.1x)), which function starts to grow earlier? 2. In this section's scenario, if Maria must have 50 PCs installed to accommodate new hires by the 10th day after installation starts, which company should she choose, Fast PCs or Express PCs?

1. f(x) starts to grow earlier, since C = 20 for f(x) whereas C = 30 for g(x); the smaller C value means the curve begins growing sooner. 2. She should choose Fast PCs, since Fast PCs will have 50 PCs ready in 8 days compared to 17 days for Express PCs.

1. What is the upper limit of f(x) = 100/(1 + 20e^(−x)) + 20? 2. What is the lower limit of f(x) = 100/(1 + 20e^(−x)) + 20?

1. 120 2. 20
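Numerically, you can see both limits by evaluating the function far to the right and far to the left. A minimal Python sketch (the probe values ±50 are arbitrary):

```python
import math

def f(x):
    """f(x) = 100 / (1 + 20e^(-x)) + 20."""
    return 100 / (1 + 20 * math.exp(-x)) + 20

print(round(f(50), 4))   # 120.0 -> upper limit: 100/(1 + 0) + 20
print(round(f(-50), 4))  # 20.0  -> lower limit: 0 + 20
```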

Another Common Function: f(x)=mx+b

A basic common function; it means that the function of x is a constant m multiplied by the variable input x, plus another constant b. Recall that a real estate agent's earnings were based solely on commission. But many jobs are paid as "base salary plus commission." This second common function can be modeled by f(x) = mx + b, where m and b are constants (that is, numbers that do not change) and x is the input variable. For example, a software sales representative makes a base salary of $40,000 a year, plus 10% commission on her sales; her annual earnings are modeled by this function: E(x) = 40,000 + 0.10x, where E represents her earnings and x is the amount of sales. Or consider this example: The cost to mail a package is a flat fee of $2.25 plus $0.08 for each ounce, so the cost, C, for mailing a package that weighs z ounces is given by the function: C(z) = 0.08z + 2.25
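Both of these mx + b examples translate directly into code. A brief Python sketch (function names and the sample inputs are illustrative assumptions, not from the lesson):

```python
def earnings(sales):
    """E(x) = 40,000 + 0.10x: base salary plus 10% commission on sales."""
    return 40_000 + 0.10 * sales

def mailing_cost(ounces):
    """C(z) = 0.08z + 2.25: per-ounce charge plus a flat fee."""
    return 0.08 * ounces + 2.25

print(earnings(150_000))           # 55000.0 -> $55,000 for $150,000 in sales
print(round(mailing_cost(12), 2))  # 3.21 -> a 12-ounce package costs $3.21
```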

Low Coefficients of Determination

A beta test version of Instinct Fighters: The Sequel started in May. Because people were invited to the beta test version in large batches, the number of gamers changed by larger margins every day. The following scatterplot also has a regression line. The coefficient of determination is (0.61)² = 0.3721. This implies that there is a moderate correlation between the number of gamers and the number of days after the beta test started. In other words, there is a general trend of an increasing number of users, but predictions based on this function are expected to be somewhat larger or smaller than the real value given the large changes in the number of online gamers shown in the data. However, this regression model is still the best linear function possible for predicting future values. Since the least-squares regression algorithm is used, no other linear function would have a higher coefficient of determination. While more data is being collected, Maria can rely on this regression function to make predictions with moderate reliability.

This next scatterplot displays an exponential regression function for June's data for Instinct Fighters 2. Here, r² = 0.1225 implies that the number of gamers and the number of days are not well correlated. It implies that there is a weak fit between the model and the data. In other words, predictions based on this function are expected to be larger or smaller than the real value. Again, this is because of the large changes in the number of online gamers shown in the data. Since this is such a weak model, more data should be collected and a better regression model calculated with the new data. Until that is available, Maria can rely on this regression function to make only very weak predictions (meaning there is not much of a point).

Real-World Applications of Linear Functions

A car is equipped with both a speedometer and an odometer. The speedometer measures how fast your car is going while the odometer measures how far your car has been driven. Some cars even have "trip odometers," which measure the distance driven on trips. These two meters on your car have something in common with lines. You will learn about that connection in this lesson and use these two meters to more intuitively understand how lines work. In this lesson, you will explore real-world applications of linear functions. You will also learn to calculate the value of a function based on a specific input, see how a function's formula determines its graph, and estimate the input and output values of a function from its graph, even without the algebraic formula itself.

Limiting Factors

A circumstance that tends to prevent an activity or quantity from expanding or changing.

Data

A collection of facts, which might be numbers, words, measurements, observations or simply descriptions.

Possible Outliers

A data point that "lies" significantly "outside" the general trend in a set of data; it could prove to be a true outlier that should be removed or it could be legitimate data that should remain in the data set Now that you have seen how regression models work, you will learn how outliers are identified and handled in regressions. Remember Youth Again? It is the mall management company that opened a new shopping center. The following graph shows the number of customers who visited the center on the first day of each month, with a curve of best fit and corresponding correlation coefficient. There is a possible outlier here, though; namely, point I. Visually, you can see why Point I is a possible outlier. It is a data point very far away from the rest of the data. While it is normal to have possible outliers in a data set, they can negatively affect the calculation of any models based on the data. Also, possible outliers are true outliers when the point lies away from the general trend of the data for reasons beyond the scope of the question the data is intended to solve. In the data above, for example, the shopping center closed early due to a power outage on September 1, resulting in fewer-than-normal shoppers. Since the power outage is something beyond the scope of the number of shoppers that Youth Again expects, that makes Point I a true outlier. What exactly do possible and true outliers do? The least-squares regression algorithm considers all data points and finds the function that fits best. For this Youth Again data set, that means Point I greatly affected the curve of best fit by "pulling" the curve toward it, which then pulled the curve away from all of the other data points. Notice how the line dips below points G, H, J, and K. This gap between the curve and these points are due to Point I. Point I also caused the right side of the curve to be higher than it would otherwise be. If you use this function to predict data in the future (that is, values beyond point M), you would overestimate those values. For example, f(13)≈3500 means you might expect 3,500 shoppers on January 1 of the new year, but this value is actually overestimated since point M is below this value. This shows that the outlier is causing predicted values to be far off from the true values. Outliers also impact the coefficient of determination, always decreasing it, causing a worse fit when outliers are present. Due to the gaps created by Point I, including the gaps between the curve and a few points above it, the regression's coefficient of determination became lower than if Point I did not exist. There are actually many other ways that outliers distort results, as well. So what is done with them? Because of the nature of the outlier—a power outage—it is appropriate to remove the outlier from the set in order to get a better regression. This means that a new regression would get a better idea of what generally happens at the mall. That is, it would disregard the rare occurrence of a power outage. The next graph shows the result with the outlier removed, the resulting new regression function, and the coefficient of determination. With the outlier removed, the function changed to g(x), which has different coefficients compared to f(x). Notice how this next graph differs from the previous one. There are smaller gaps between data points and the curve, creating a higher coefficient of determination, r2=0.98. The right side of the curve became lower, creating smaller values than f(x). 
You should trust values predicted with g(x), because it is created with an outlier removed. Now when you look at g(13)≈2900, you see that the prediction is for 2,900 shoppers on January 1 of next year, which is much closer to point M. Also, the value predicted by the new function is smaller than f(13)≈3500, which was calculated with the outlier in the data set. In the scenario involving Youth Again's new shopping center, how did the outlier affect the coefficient of determination? - The outlier made the coefficient of determination smaller. In the scenario involving Youth Again's new shopping center, how did the outlier affect the predicted value on January 1 of the next year? - The outlier caused an overestimation of the value on January 1 of the next year.

Outliers and Dog Food

A data point that "lies" significantly "outside" the trend in a set of data. In a scatterplot, a value that "lies outside," or is much smaller or larger than most of the other values in a set of data, is called an outlier, and it can change the best-fit line and the correlation coefficient of the data by a great deal. Consider the following scatterplot, which looks at the situation just discussed. There is a very low value associated with the month of September. Canine-ivore did not sell very much that month, reaching only about a third of its usual monthly sales. As a result, the best-fit line (the red solid line) does not fit the scatterplot very well. It misses September's sales by a mile, and it is below all but two of the other data points. Its equation is y=-34.59x+3807.09. That slope alone, given the slight upward trend of the data excluding the month of September, is remarkably steep and negative. The correlation coefficient of the data is only -0.1749, which is a very weak correlation. If the data associated with September is removed from the scatterplot, things change a great deal. Here you see that the best-fit line appears to go through almost all of the points; though hypothetically if you zoom in, you will see that it is only very close to them. Its equation is y = 8.98x + 3731.56. The correlation coefficient suddenly becomes 0.9768. The line is almost right on top of the data points. The change is dramatic. As you can see, a single outlier can change the analysis of the data tremendously. In the case of this problem, maybe the company that made Canine-ivore suffered a mishap and could not produce enough food to meet sales numbers. By removing that outlier from the data set, when it is justified to do so, things become more like what you expect. Which point is the outlier? H stands well out from the rest of the points and far from the best-fit line. If the outlier were removed from the scatterplot, how would the best-fit line be most likely to change? The position of the outlier so far above the rest of the points pulls the best-fit line upward. By removing the outlier, the best-fit line would tilt more steeply downward. Summary Lesson Summary Sometimes one or two data points are far out of line with the rest of the data, and they can affect the best-fit line and the validity of your conclusions. Here is a list of the key concepts you learned in this lesson: Outliers cause changes in both the best-fit lines and correlation coefficients of a scatterplot. Removal of the outliers from a plot, when justified, will give a better fit line, assuming that the other points on the scatterplot are more or less linear.

Common Era

A designation previously called "A.D.," meaning anno Domini, "in the year of the Lord"; the Common Era dates year 1 as the presumed year of Jesus of Nazareth's birth.

Graph

A diagram showing the relationship between variable quantities on two axes drawn at right angles.

Profit

A financial gain when the amount of revenue from a business activity exceeds the expenses, costs, and taxes needed to maintain the business activity.

Function

A relation based on a set of inputs and a set of possible outputs where each input is related to exactly one output. A function is useful because it expresses the relationship between two quantities, where one quantity, called the output, is determined by the value of another quantity, called the input.

Many real-world events can be modeled by functions. Functions are important building blocks for understanding things like economic production of goods, financial analysis, population growth, and even the spreading and curing of diseases. In computer science, the phrase "garbage in, garbage out" expresses the idea that in programming, incorrect data or poor-quality input always produces faulty output, or "garbage." The best part is that you can use functions to predict future and past values.

Function notation uses the input and output variables to show the relationship and is written f(input) = output, where variables are used for each quantity. For example, predicting revenue is important in business. Revenue is based on how many units of a product are sold. If your company sells wireless speakers for $60 each, then the expression R(n) = 60n models the revenue, R, when n speakers are sold. The number of speakers sold is the input, and revenue is the output. Consider this example: As an IT manager, you are responsible for upgrading the computer systems for your company. A new software package might cost a flat fee of $87.50 plus $2 per computer to install. The cost for x computers can be calculated with the following function: C(x) = 2x + 87.50
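Both functions in this entry translate directly into Python one-liners. A minimal sketch (the sample inputs are illustrative assumptions):

```python
def revenue(n):
    """R(n) = 60n: revenue when n wireless speakers sell at $60 each."""
    return 60 * n

def upgrade_cost(x):
    """C(x) = 2x + 87.50: flat fee plus $2 per computer installed."""
    return 2 * x + 87.50

print(revenue(25))       # 1500 -> R(25) = 1500
print(upgrade_cost(40))  # 167.5 -> C(40) = $167.50
```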

Function of Best Fit

A function that best represents given data points.

Exponential Function

A function whose value increases or decreases at a rate proportional to its current value, producing extremely rapid growth or decay; its form is f(x) = b^x.

Linear Functions

A function that produces a straight line (linear means "straight"), indicating a constant increase or decrease.

Logistic Functions

A function that produces an s-shaped curve and contains two asymptotes.

Inverse Functions

A function that undoes, or reverses, the action of another function. If you have ever taken off your socks, untied a knot, or used multiplication to check division, you have used an inverse function. Inverse functions are basically functions that "reverse" each other. Whatever the function does, its inverse undoes. Therefore, whenever you undo something, you are using an inverse function. To find the input and output of the inverse function, you simply swap the input and output of the original function. Remember the example above about the conversion of temperatures from Celsius to Fahrenheit? You can undo that and go from Fahrenheit to Celsius, too. When would this be useful? Say your company sells frozen food items internationally and needs to keep them at a safe temperature during shipment. You may need to convert from Celsius to Fahrenheit when reviewing international purchase orders, but people in other countries may need to do the inverse and convert from Fahrenheit to Celsius. Here is another example. When you want to call someone, you may have to look up a phone number in a phone directory. This is a function because each name corresponds to one phone number; the name is the input while the phone number is the output. Caller ID is essentially the inverse function; it takes the phone number as an input and outputs the name associated with that phone number. Finally, think back to the example about an IT technician ordering a new computer for every employee in the office. The total cost of the order is related to the number of people employed. As you learned, in this context the number of employees was the independent variable, or the input, and cost was the dependent variable, or the output. But suppose the company has cost constraints and only has a limited budget for new computers. Now money becomes the input used to find how many employees are able to get a new computer. These examples show that both the function and its inverse can be useful in different ways, depending on the situation. For an additional example, look back at the gallons of gas (g) versus cost (C) functions. Those are inverse functions of each other as well. Keep in mind how that is expressed: they are inverses of each other. That means it really does not matter which one you call the inverse function, since they undo each other. Are you still using the last app you installed? Did you know that most people download an app, use it once or twice, and then forget about it? If only 30% of people continue to use an app after downloading it, the function can be modeled, or represented, by R(a) = 0.30a, where R represents the number of returning users and a is the number of people who download an app. This same function, R(a) = 0.30a, can be used to model other—in fact, multiple—real-world scenarios. For example, maybe 30% of children miss one day of school each month in a particular town. In this lesson, you will see how a given function can be used for multiple real-world scenarios and how learning about such functions can save you time and effort.

Polynomial

A function built from constants and variables combined using addition, subtraction, and multiplication, where the variables may be raised only to non-negative integer exponents. The word comes from poly- (meaning "many") and -nomial (meaning "terms").

Line Graph

A graph that displays data points connected by segments of straight lines.

Asymptote

A horizontal, vertical, or slanted line on a graph that a curve approaches but never touches.

Exponent

A mathematical notation - a superscript - that defines the number of times a number is multiplied by itself.

Equation

A mathematical statement that two things are equal; it consists of two expressions, one on each side of an equals sign.

Interpolation

A method of inserting new data points in the range of a known set of data points. ("Inter" means among.)

Extrapolation

A method of inserting new data points outside the range of a known set of data points. ("Extra" means outside.)

Subscripts

A number, letter, or symbol set slightly below the normal line of type, at or below the baseline.

Ordered Pair

A pair of numbers used to locate a point on a graph, written in the form (x, y) where x is the x-coordinate and y is the y-coordinate.

Trends

A pattern of change in a condition or process, often represented by a line or a curve on a graph.

General Trend

A pattern of gradual change in a consistent direction, represented by a line or curve on a graph.

Graphs and Intervals

A plumber says he will visit your apartment between 10 a.m. and noon. A meteorologist calls for between one and two inches of snow. A pollster predicts that a candidate has between 35% and 42% support from the electorate. All of these everyday examples describe intervals of numbers. Although in mathematics numbers represent precise points on a number line, you often want to describe a range of numbers, or a segment of the number line. Intervals are defined by where they begin and end, and also by whether they include the endpoints. In this lesson, you will see why intervals of numbers are so useful, how they are represented on a number line, how to use correct notation to indicate if endpoints of an interval are or are not included, and how intervals can describe where a function is increasing or decreasing.

Estimate the inflection point in this graph. Is the function increasing faster and faster or slower and slower from point A to B?

A point close to (4.2, 160). From point A to point B, the function is increasing slower and slower.

Inflection Point

A point on a curve at which the curve's concavity changes; also called "point of inflection."

Which regression model should you choose for the following data set? Why?

A polynomial function should be used to model this data because the data matches the shape of a third-degree polynomial function.

Regression Function

A proposed function to model a data set, based on a regression analysis.

Interval

A range of numbers that includes all the real numbers between its endpoints. With interval notation, intervals are written by listing the pair of endpoints, enclosed in parentheses, ( or ), if the endpoints are not included, or in brackets, [ or ], if the endpoints are included. Intervals are used to represent mathematical statements involving inequalities. Intervals can be used to describe where a function is positive or negative (or where it is increasing or decreasing).

Mappit and Rate of Change

A rate of change is the amount that an output y changes for a certain input or set of inputs x. A familiar rate of change for distance is miles per hour. This can be the rate of change at a moment, like when you look down at your speedometer. It can also be the rate of change over a period of time, like when you drive 200 miles in four hours. You may have already looked at polynomials and their rates of change. If you did, you saw two different types of rates of change: instantaneous rate of change and average rate of change.

Average Rate as a Ratio

A ratio is a statement of how two numbers compare. It is a comparison of the size of one number to the size of another number. For example, if there are 8 boys and 2 girls in a class, the ratio between boys and girls is 8/2 = 4/1. If another class has 16 boys and 4 girls, the ratio between boys and girls is 16/4 = 4/1. Although those two classes have different numbers of boys and girls, the ratio between boys and girls is the same. The rate of change is actually a ratio of the vertical and horizontal changes between two points. For example, if Grace jogged 2 miles in 0.5 hour and 4 miles in 1 hour, those two points are (0.5, 2) and (1, 4). The vertical change between those two points is 4 − 2 = 2, and the horizontal change is 1 − 0.5 = 0.5. Their ratio can be calculated by the rate of change formula: rate = (y2 − y1)/(x2 − x1) = (4 − 2)/(1 − 0.5) = 2/0.5 = 4 mi/hr. Grace jogged 1 mile in 0.25 hour. If you choose to calculate the average rate of change from (0.25, 1) and (1, 4), the ratio is: rate = (y2 − y1)/(x2 − x1) = (4 − 1)/(1 − 0.25) = 3/0.75 = 4 mi/hr. Because Grace jogged at a constant speed, no matter which two points you choose, the ratio remains the same. Keep this important concept in mind: A linear function always has a constant rate of change. Calculate the rate of change from (0, 10) to (3, 4), and then calculate the rate of change from (0, 10) to (6, −2). Then, decide whether those three points are on the same line. Since both rates are −2, those three points are on the same line: rate = (4 − 10)/(3 − 0) = −6/3 = −2, and rate = (−2 − 10)/(6 − 0) = −12/6 = −2. Calculate the rate of change from (0, −10) to (−7, 4), and then calculate the rate of change from (0, −10) to (3, −4). Then, decide whether those three points are on the same line. One rate is −2 and the other rate is 2, so those three points are not on the same line: rate = (4 − (−10))/(−7 − 0) = 14/(−7) = −2, and rate = (−4 − (−10))/(3 − 0) = 6/3 = 2.
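The slope formula above is a one-line helper in code. This Python sketch recomputes Grace's jogging rate and both collinearity checks from this entry:

```python
def rate_of_change(p1, p2):
    """Average rate of change (slope) between p1 = (x1, y1) and p2 = (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(rate_of_change((0.5, 2), (1, 4)))   # 4.0 mi/hr
print(rate_of_change((0.25, 1), (1, 4)))  # 4.0 mi/hr -- constant speed

# Three points lie on one line only if both slopes agree.
print(rate_of_change((0, 10), (3, 4)))    # -2.0
print(rate_of_change((0, 10), (6, -2)))   # -2.0 -> same line
print(rate_of_change((0, -10), (-7, 4)))  # -2.0
print(rate_of_change((0, -10), (3, -4)))  #  2.0 -> not the same line
```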

Quadratic Regression

A regression that finds the best-fit curve based on at least three data points.

Linear Regression

A regression that finds the best-fit line based on at least two data points.

Scatterplot

A set of points plotted on horizontal and vertical axes, useful for identifying and illustrating trends. The following graph is called a scatterplot. A scatterplot is a graph of data points used to examine the relationship between two variables. Here you see 30 discrete points of data. These 30 points represent responses to Jack's survey of customers. The responses are restricted to integer values by rounding the response time to the nearest whole minute and by asking the customers to give their level of satisfaction as an integer between 0 and 10, with 0 meaning they are totally unhappy with the service and 10 meaning they are completely happy with the service. At first glance, there is not much that can be said about this data. You might note that most of the high points occur when the time is relatively short, and most of the low points occur when the customers have had to wait for a while, but there are exceptions. This information is not significant without some way to analyze the data you have. To really analyze this data, you need a line that comes as close to all the points as possible. This line is called the line of best fit, or the best-fit line. This line tells you, on average, how the data behaves. In this case, "on average" means that the best-fit line falls as close to each point as possible. In general, a function that fits the overall pattern of the data points in a scatterplot is called a model for the data set. The following scatterplot has only 10 points of data and compares the amount of time, in minutes, that the caller spent waiting for a response to their impression of the politeness of the operator. At a glance, the points of data seem to be higher on the left and lower on the right, so it trends downward from left to right. Callers who had to wait longer for a response found the operators less polite.

Coefficient of Determination

A statistic that measures how well a regression's predictions approximate the real data points. An r^2 of 1 indicates that the regression prediction is a perfect fit for the data.

Line of Best Fit or Trend Line

A straight line on a graph showing the best possible match for the data; also called a "trend line."

Grouping Symbol

A symbol, such as parentheses, brackets, braces, radicals, or fraction lines, that indicates the starting and ending points of a group. In the order of operations, perform the actions inside the grouping symbol first.

Dependent Variable

A variable that is changed by another variable (called the independent variable); for example, when a person is purchasing gas, the cost (the dependent variable) rises as the amount of gas pumped into the car's tank (the independent variable) increases. The dependent variable is the variable that responds to the independent variable; that is, the dependent variable responds to change. On a graph, the dependent variable is usually labeled on the y-axis, which is the vertical axis. In the example about making $20 per hour, pay is the dependent variable. Another way of saying this is that because the pay responds to, or is affected by, the number of hours worked, pay is the dependent variable. In the example about children's height, height is the dependent variable. Finally, in the example about ordering computers, cost is the dependent variable. As with independent variables, it can sometimes be hard to identify the dependent variable without context. Also, remember that the independent and dependent variables sometimes switch depending on context, so be careful when you assess each situation.

Quantitative

A variable that is measured on a number scale; a variable with a numeric value. Quantitative variables are number-related; they are things that you can count or measure.

Qualitative

A variable that is not numeric; also called a categorical variable, a qualitative variable describes data that fits into categories. Qualitative variables have to do with non-number characteristics; these are things that you usually observe but do not count or measure.

Big Box Stores

A very large store, typically over 50,000 square feet, of plain design, resembling a large box.

Given a real-world situation and a graph of a function modeling the situation, interpret a maximum or minimum in context.

Al, an IT specialist, wants to manage his company's internet connection, or bandwidth, better to help reduce costs. His boss, Eve, points out that this is possible with the help of a function. The main idea is to run updates, not at the times when bandwidth demand is high (that is, when internet use is at a maximum), but rather during times when demand is low (that is, when internet use is at a minimum). In this lesson, you will identify both local and global maxima and minima by looking at graphs of functions. You will also interpret maximum and minimum values in context.

Two Horizontal Asymptotes

All logistic functions have two horizontal asymptotes; one is the lower limit while the other is the upper limit. Identifying the occurrence of asymptotes in a situation is important, as it gives you further insight into the problem and helps to identify if a logistic function is truly appropriate for a situation. With that in mind, consider this example: The amount of memory used by Francisca's program, in megabytes (MB), during a test run can be modeled by the function M(t) = 190/(1 + 50e^(−0.8t)) + 10, where t is the number of minutes since the program started to run. Using the following graph, does it appear that this scenario truly has a lower and upper limit? You can see that this function does tend toward the y-value 10 in the negative x-direction, while it tends toward the y-value 200 in the positive x-direction. These truly are lower and upper limits then; these are horizontal asymptotes. The lower limit would be the asymptote y = 10, while the upper limit would be the asymptote y = 200. Since these are the values suggested by the publisher, this is good news for Francisca! Take a moment to look at the equation modeling this situation: M(t) = 190/(1 + 50e^(−0.8t)) + 10. The lower limit of y = 10 can be seen at the end of the equation. Where can the upper limit of 200 be found in the equation? It is actually found by taking the 190 in the numerator and adding it to 10. This does give you a way to identify the upper and lower limits of a logistic equation just by looking at the equation itself. The next section includes some more examples of logistic functions, their graphs, and their corresponding upper and lower limits. Make sure you can identify the upper and lower limits just based on the equation as well as in the graph. [The graph shows a curve that rises almost horizontally in the second quadrant through (negative 80, 30), passes through approximately (0, 36), then rises almost vertically through the first quadrant, and then turns almost horizontal again through (10, 130).] Example #1 Logistic equation: f(x) = 100/(1 + 20e^(−0.8x)) + 30 Upper limit: y = 100 + 30 = 130 Lower limit: y = 30 [The graph shows a curve that rises almost horizontally in the second quadrant through (negative 40, 35), passes through (0, 40), then rises in the first quadrant, and then turns almost horizontal through (20, 92).] Example #2 Logistic equation: g(x) = 57/(1 + 11e^(−0.5x)) + 35 Upper limit: y = 57 + 35 = 92 Lower limit: y = 35 [A graph shows a curve that falls from the second quadrant through (negative 80, 60) and (negative 20, 40), then falls almost horizontally in the first quadrant through (40, 20).] Example #3 Logistic equation: h(x) = 40/(1 + 5e^(0.08x)) + 20 Upper limit: y = 40 + 20 = 60 Lower limit: y = 20 What this means is that when you have a logistic equation, there will always be two asymptotes, and you can identify them from the equation. Recall that the general form of the logistic function is f(x) = L/(1 + C × e^(−kx)) + m. This means the lower asymptote of a logistic function is always y = m, while the upper asymptote will be y = L + m.
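Because the limits can be read off the general form f(x) = L/(1 + C × e^(−kx)) + m, a few lines of Python can confirm them. This sketch uses Francisca's memory model (L = 190, C = 50, k = 0.8, m = 10):

```python
import math

def logistic(x, L, C, k, m):
    """General logistic form: f(x) = L / (1 + C * e^(-k*x)) + m."""
    return L / (1 + C * math.exp(-k * x)) + m

L, C, k, m = 190, 50, 0.8, 10          # Francisca's model M(t)
print("lower asymptote: y =", m)       # 10
print("upper asymptote: y =", L + m)   # 200

# The outputs stay strictly between the two limits:
for t in (-10, 0, 5, 20):
    print(t, round(logistic(t, L, C, k, m), 2))
```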

Starting Values

Also "initial value" or "starting point." The output value of a function when the input value is 0.

Y-Intercepts

Also "starting value," "starting point," or "initial value." The output value of a function when the input value is 0. You now know that b is the starting value of a function and m is the slope. In math terms, the value b is called the y-intercept of a line; b tells you the value of y when x = 0. Examine the following two examples of slope and y-intercept. In the first leg of Seth's journey, where he left home and drove for three hours, the y-intercept is 0 because he had traveled 0 miles at his starting point when he began to track his time (t). In the second leg of the trip, after Seth had stopped for food, the y-intercept is 250 because he had traveled 250 miles from home when he resumed his trip. Consider this next example: Seth has decided that it is time to buy a new car. As he does so, follow along and see how the ideas of slope and y-intercept come into play here, too. Seth has picked out the model he wants and gets quotes from four different dealerships, all of which offer plans that would let Seth pay off the car in 6 years (72 months). Here are the four offers: Offer A: $8,000 down payment and $150 a month Offer B: $5,000 down payment and $200 a month Offer C: $2,000 down payment and $250 a month Offer D: no down payment and $300 a month Which is the best deal for Seth? None of the offers include a total payment, so Seth has to do a bit of calculation. Payment in each of these situations can be expressed as a linear function, with t representing the number of months: A(t)=150t+8000B(t)=200t+5000C(t)=250t+2000D(t)=300t For each of these linear functions, the monthly charge is the slope (m, also called rate of change), and the down payment is the y-intercept (b, or starting value). Compare their graphs: Find each linear function's value when x = 72; this is the end of 72 months, or 6 years of payments. It turns out that the deal with no down payment and a $300 monthly payment (deal D) is the worst deal of the four, costing Seth $21,600 in total for the car. The deal with the highest down payment, $8,000, and a monthly payment of $150 (deal A) is the best deal. Seth would pay only $18,800 in total for the car. Here are some important observations: - Deal D, which is D(t)=300t, has the only line crossing the origin because its y-intercept is 0. This is because the starting value for this deal has Seth paying 0 dollars. - Deal A, which is A(t), has the largest y-intercept because its starting value on the y-axis is the highest, at 8,000, for $8,000. - The steeper the line is, the greater the magnitude of its slope. In the graph, Deal D, or D(t), is the most steep, with a slope of $300 per month, and Deal A, or A(t), is the least steep with a slope of $150 per month. Which of the following functions is increasing with the steepest slope? This line has the steepest incline since it has the largest slope.A(x)=30x+6 This line is declining and is the smallest decline of all the lines here.

the number e

Also called Euler's (pronounced "oiler's") number, e is a constant equal to 2.7182818284590452353602874713527...

The correlation coefficient of the following linear regression is 0.91. Which statement is true? The following scatterplot shows Randall Computers' funding for its service department since 2000, with a polynomial regression. Which statement is true for this regression?

Although the correlation coefficient is good, the data clearly shows a curve. Other regression models should be tried. A good correlation coefficient should not be the only criterion to consider. Since f(−1) ≈ 9.09, funding for the service department was approximately 9.09 million dollars in 1999. Since x = −1 is very close to the data range, this prediction is trustworthy. Although f(−10) ≈ 3.33, the result cannot be trusted because x = −10 is an extreme extrapolation value. Even if Randall Computers existed in 1990, never stretch the regression function more than 50% of the range without the training of a regression professional.

Changes at an Instant—Instantaneous Rates of Change

An average rate of change is simply a way of seeing how one variable changes with respect to another in a real-world problem. An average rate of change is always an average of how variables are changing over some interval. Remember the change in temperature problem—you were looking at how the temperature changed over a two-hour interval. But what if you want to know the rate of change at a particular instant? The second rate of change, the instantaneous rate of change, is defined as the slope evaluated at a single point, or the rate of change at a particular moment. Imagine you are sitting outside on a summer day and there have been lots of clouds all day. From one instant to the next, the clouds may be blocking the sun or they might not be. The instant the sun is not blocked by clouds, you feel an increase in the temperature. Instantaneous rates of change look at rates of change at an instant or over very short intervals of time (on a graph, two points that are very close together). When these intervals are very, very short, the average rate of change is approximately equal to the instantaneous rate of change. All you need to know for this course (with respect to rates of change) is that: There are two types of rates of change (average and instantaneous). You can calculate average rates of change using the slope formula. You should be able to interpret what average and instantaneous rates of change indicate in a situation. On a recent trip, your friend Kyle drove 130 miles in 2 hours. He tells you he drove about 65 miles per hour for most of the trip but did speed up to 70 miles per hour at one point while he was passing someone. Which of Kyle's speeds (the 65 mph or the 70 mph) represents an average rate of change? The average rate of change is 65 mph. It is the speed that Kyle averaged for his 2-hour trip. It is also the average rate of change because it covers a pretty large interval of time. On a recent trip, your friend Kyle drove 130 miles in 2 hours. He tells you he drove about 65 miles per hour for most of the trip but did speed up to 70 miles per hour at one point just while he was passing someone. Which of Kyle's speeds (the 65 mph or the 70 mph) represents an instantaneous rate of change? The instantaneous rate of change is 70 mph. This is because it is the speed that Kyle momentarily sped up to pass someone. This is also the instantaneous rate of change because it covers a pretty small interval of time with respect to his 2-hour trip (he was only going 70 mph for a few "instants" of his trip). Lesson Summary You should now understand the difference between the average rate of change in the amount of drug in your bloodstream after a period of time and the instantaneous rate of change in the amount of drug in your bloodstream at a particular moment in time. Here is a list of the key concepts in this lesson: An average rate of change represents how one variable changes with respect to another over an interval of values (typically an interval of values for the independent variable). You can find the average rate of change between two points by using the slope formula. An instantaneous rate of change represents how one variable changes with respect to another at a particular instant (typically an instant defined by a specific value of the independent variable).

Quadratic Polynomial

An equation that includes one variable raised to the second power (like x^2) and no greater exponent; its name comes from the Latin quadratus, because the variable is squared.

Which regression model should you choose if a set of data's y-value decreases by a third every five years? Why? Which regression model should you choose if a set of data's y-value increases sharply at first, and then becomes flat and approaches 10? Why?

An exponential model should be chosen because the data has a constant ratio. A logistic model fits this set of data, which has a limit and is shaped like part of the letter S.

Validity

An idea's or conclusion's degree of correctness in the real world.

Independent and Dependent Variables

An independent variable explains changes in the dependent variable, while a dependent variable measures the changes. Looking back at the issue of temperatures and ice cream sales, the independent variable is the outside temperature and the dependent variable is the amount of ice cream sold. Put another way, the temperature is the cause and the sales level is the result. Examine how the following graph represents this situation for one of Double Dip!'s stores. Notice that the independent variable, the temperature, is positioned on the x-axis (the horizontal axis) and the dependent variable, ice cream sales, is positioned on the y-axis (the vertical axis). This is the normal convention. However, at times, depending on the context, it may be more appropriate to present a graph the other way around, with the independent variable on the y-axis and the dependent variable on the x-axis. The point (60, 2.4) implies that a Double Dip! store is expected to get $2,400 in income per day when the temperature is 60 ℉.

Instantaneous Rate of Change versus Average Rate of Change

An instantaneous rate of change is ... instantaneous! Algebraically speaking, it means the rate of change for the output value of a function y at a specific input value x. Average rate of change is the rate of change between two x values. Average rate of change is essentially the rate of change for y as x goes from some number to another.

What is an outlier in a data set? Which statement is always true about outliers?

An outlier is a data point that is distinctly different from the rest of the data. In the example involving Gormlaith's team, the coefficient of determination's value improved (became closer to 1) once the outlier was removed.

Temperature Conversion

Another area in which inverse functions are particularly useful is temperature conversion; that is, Fahrenheit to Celsius, and Celsius to Fahrenheit. In the United States, temperatures are generally measured in degrees Fahrenheit. However, data in some fields is almost exclusively communicated in Celsius, as are temperatures in most other nations in the world, making basic knowledge of temperature conversion a useful skill. You can convert a given temperature in Fahrenheit to Celsius using this formula: C(F) = (F − 32)/1.8. Using this function C(F) = (F − 32)/1.8, calculate and interpret C(75) to the nearest degree. 75 degrees Fahrenheit is approximately 24 degrees Celsius: C(75) = (75 − 32)/1.8 ≈ 24. Using the equation above as the original function, what are its inverse function's independent and dependent variables?
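A quick way to see that the conversion and its inverse undo each other is to compose them. A minimal Python sketch (the inverse formula F(C) = 1.8C + 32 follows from solving C = (F − 32)/1.8 for F):

```python
def to_celsius(f):
    """C(F) = (F - 32) / 1.8"""
    return (f - 32) / 1.8

def to_fahrenheit(c):
    """Inverse function: F(C) = 1.8C + 32"""
    return 1.8 * c + 32

print(round(to_celsius(75)))          # 24 -- 75 degrees F is about 24 degrees C
print(to_fahrenheit(to_celsius(75)))  # 75.0 -- the inverse undoes the original
```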

Using Input Values

As a network administrator, you collect data regarding the number of remote connections to the network and then summarize the total number by year. In this case, the number of connections is a function of time. With regard to the overall trend, you can see there is a general increase in the number of remote connections across the years. However, what if you compare the connections specifically from 2005 to 2008? You can see that 2008 had more remote connections (approximately 1000) while 2005 had approximately 780 remote connections. From the graph of the number of remote connections by year, you can understand the overall trend and trends between years.

Each year, a city increases its spending to help homeless people. This linear regression models the number of homeless people in the city's downtown area since 2000. The following chart shows the linear regression with the function P(t)=−20.1t+464.93, where t is the number of years since 2000. Solomon runs a business that tests auto performance for auto manufacturers. Today, he is testing the braking system of a car. The following scatterplot shows data on the distance the car continued after the brake was applied, with a polynomial function to model the data. Analyze the function's extended rate of change and explain why this function should be used only for a limited interval of values.

As t goes to positive infinity, the function's rate of change stays at −20.1, implying that there were approximately 20 fewer homeless people per year in the downtown area. This decrease cannot continue forever in this fashion, since it means the number of homeless people would eventually become negative. As such, this linear regression model should only be used for values between 0 and 12. As t goes to positive infinity, the function's rate of change goes to negative infinity. This implies that the car eventually goes backwards at a certain point after the brake was applied. In reality, at some point after braking, the car would simply stop; it would never start going backwards. Based on this data, this regression function should only be used for values between 0 and 3.5.

Given a real-world scenario, a corresponding exponential graph, and an instantaneous rate of change, interpret the instantaneous rate of change in context.

As you are driving, running, or even just walking, your speed changes, sometimes a lot. If you look for an average rate of change while driving, it does not tell you much about how your car speeds up or slows down. Think about braking while driving on a wet or icy road. Your speed does not decrease by the same amount every single instant. Your speed drops quickly when you have enough traction but more slowly when you slide on the slippery patches. Your rate of change when braking on slippery roads changes from instant to instant. Instantaneous rates of change are really important in a context like that one, and that is what you will explore in this lesson. You will find that, as with average rates of change, knowing the units being measured is critical.

Writing Functional Notation

As you continue with Carlo's situation, try to identify the independent and dependent variables and write the equation of the function described. Note that throughout this lesson, independent and dependent variables may seem less clear than they have in previous examples; the reason for this will become clear later on, but for now, do the best you can in identifying variables as they are described. Back to Carlo. Remember, for every two tablets he purchases, he can replace one of the company's 25 computers. You can treat the number of computers to be replaced as a function of the number of tablets to be purchased. The function's equation would look like C(t)=−0.5t+25, where C(t) is the number of computers to remain, and t is the number of tablets to be purchased. For example, if Carlo decides to purchase 2 new tablets, substitute t = 2 into C(t), and you have: C(2)=−0.5(2)+25=24 The result implies that if Carlo purchases 2 new tablets, his company would have 24 computers left.

Comparing Multiple Variables

As you have probably already figured out, not all scenarios can be described with only two variables. Real-world scenarios are often so complex that multiple inputs lead to one output. These are often called "multivariate situations" since multiple (more than two) variables are involved. Take the revenue generated by any big-box store. Big-box stores sell tons of products, and some of those products sell faster than others, while other products sell slower but generate more revenue with each sale. As you can imagine, tracking the revenue of such a big-box store likely involves hundreds or thousands of variables. To get at this same idea on a smaller scale, consider a local computer shop, where Brant sells laptops (L) for $400 each and tablets (T) for $250 each, so revenue is modeled by: R(L, T) = 400L + 250T. Notice how both L and T appear in the parentheses for R. This is saying that R (revenue) depends on the sale of both laptops (L) and tablets (T). That is to say, there are two independent variables (L and T) to the one dependent variable (R). In a recent sale to a small company, Brant sold 5 laptops and 3 tablets. For this particular sale, his revenue would look like the following: R(5, 3) = (400 × 5) + (250 × 3) = 2000 + 750 = 2750.
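In code, a multivariable function simply takes one parameter per independent variable. A minimal Python sketch of Brant's revenue function:

```python
def revenue(laptops, tablets):
    """R(L, T) = 400L + 250T: revenue from laptops at $400 and tablets at $250 each."""
    return 400 * laptops + 250 * tablets

print(revenue(5, 3))  # R(5, 3) = 2750
```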

When Is a Function Decreasing Faster?

As you have seen with some of the exponential models, some increase faster or slower than others. Sometimes noticing this can help you identify an optimal situation. With that in mind, consider the following example: Campbell Computers has a new processor that allows for quicker computing times. Campbell's design team investigated the cost-effectiveness of adding more processors compared with the speed gained by doing so. The computing time in seconds, t, for x processors is modeled by the function t(x) = 7 × 0.93^x. The graph and the instantaneous rate of change for this function at a point are depicted in the following applet. What do you notice about the instantaneous rate of change as the number of processors increases? As the x-values get larger, the slopes of the lines get closer to 0. The change in the slopes can be seen by the fact that as x gets larger, the slope of the line passing through the point slowly decreases in magnitude. If x kept getting bigger and bigger, then the instantaneous rate of change would eventually get really close to 0. What does this mean? In this context, the decrease in the slopes of the lines means that adding more and more processors in this situation does less and less to speed computations. This is important because it means that buying 500 processors would not be much better than buying 400 processors, but it sure would be more expensive. If you have ever heard the phrase "diminishing returns," you probably recognize that this is a situation that fits that phrase perfectly. Note that the instantaneous rate of change here is getting close to 0, but it will never equal 0. This happens with all exponential functions with a growth factor of less than 1, such as the 0.93 in this example. For exponential functions that grow, their rates of change will always go to infinity over time. It is unrealistic for something to keep growing infinitely. This is one of the shortcomings of exponential functions—a shortcoming that will be addressed with logistic functions in the next unit. Given the graphs of the functions f(x) = 50 × e^(−0.5x) and g(x) = 50 × e^(−0.25x), which will decrease faster as x gets larger? Note: the function f(x) is shown by a solid black curve and g(x) is shown by a red dashed curve. Both functions start at the same initial value, 50, but as x gets bigger, f(x) decreases faster than g(x) because its decay rate, 0.5, is larger.
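You can watch the diminishing returns numerically by approximating the instantaneous rate of change with a very small interval, since over a tiny interval the average rate is approximately the instantaneous rate. A Python sketch using the computing-time model t(x) = 7 × 0.93^x:

```python
def t(x):
    """Computing time in seconds with x processors: t(x) = 7 * 0.93**x."""
    return 7 * 0.93 ** x

def approx_instant_rate(f, x, h=1e-6):
    """Average rate of change over a tiny interval approximates the instantaneous rate."""
    return (f(x + h) - f(x)) / h

for x in (1, 10, 50, 100):
    print(x, round(approx_instant_rate(t, x), 5))
# The slopes are negative and shrink toward 0: each extra processor helps less.
```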

Revenues with a Better Model

As you have seen, higher-degree polynomials can accommodate more turns in the data. With that in mind, revisit the ice cream example. Seasonal data always involves at least two turns (think of these as the "on" and "off" season turns), so revisit modeling ice cream revenues with higher-degree polynomials. Remember Neighbor Ice Cream Shop? Earlier, a quadratic was used to model the current year's data because there was only one turn in the data. However, when modeling any seasonal selling item, like ice cream, there are often two turns: one going into the high-activity season and another going into the low-activity season. To account for that, Neighbor Ice Cream Shop built a cubic model for the next year's sales data to account for the two turns in sales. So, the store's daily revenue, r, in dollars, can be modeled by the function r(m) = −1.4m^3 + 30m^2 − 150m + 350, where m is the number of months since September. The following graph represents this function. To estimate the shop's revenue on December 1 of the next year, substitute m = 3 into r(m): r(3) = −1.4(3)^3 + 30(3)^2 − 150(3) + 350 = −1.4(27) + 30(9) − 150(3) + 350 = −37.8 + 270 − 450 + 350 = 132.2. This means that the shop's daily revenue on December 1 was $132.20. Compared to the previous year on the same day, daily revenue was up just over $27. Notice that even though a cubic function is used to model the sales data here, the process for simplifying input-output pairs is very similar to what was done before with the quadratic model. One advanced technique is to calculate the input-output pair for a date not corresponding to a whole number. For example, what if you wanted to know the daily revenue for the shop on December 16? To estimate the shop's revenue on December 16, you need to determine what value to substitute into the function, r. Since the 16th is about halfway through the month of December, substitute m = 3.5 into r(m). When this is done, you have: r(3.5) = −1.4(3.5)^3 + 30(3.5)^2 − 150(3.5) + 350 = −1.4(42.875) + 30(12.25) − 525 + 350 = −60.025 + 367.5 − 525 + 350 = 132.475. Again, note that when you calculate −1.4(3.5)^3, you need to do the exponent calculation first to get −1.4(42.875) and then multiply. The same order of operations applies when you calculate 30(3.5)^2 = 30(12.25). The result implies that the shop's revenue on December 16 was approximately $132.48. In the function's graph, it does look like the function passes through the point (3.5, 132.48). Similarly, to estimate the shop's revenue on March 11, substitute m = 6.33 into r(m), and you have: r(6.33) = −1.4(6.33)^3 + 30(6.33)^2 − 150(6.33) + 350 = −1.4(253.636137) + 30(40.0689) − 949.5 + 350 = −355.0905918 + 1202.07 − 949.5 + 350 ≈ 247.48. The result implies that the shop's revenue on March 11 was approximately $247.48. And in the function's graph, it does indeed look like the function passes through the point (6.33, 247.48).
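Evaluating the cubic model is the same substitute-and-simplify process shown above, so it is a natural candidate for a short script. A minimal Python sketch reproducing the three revenue estimates:

```python
def r(m):
    """Daily revenue r(m) = -1.4m^3 + 30m^2 - 150m + 350, m = months since September."""
    return -1.4 * m**3 + 30 * m**2 - 150 * m + 350

for m in (3, 3.5, 6.33):      # Dec 1, Dec 16, about Mar 11
    print(m, round(r(m), 2))  # about 132.2, 132.48, and 247.48
```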

Inverse Functions Continued 1.2

As you know, some functions do not have clearly defined inputs and outputs. In cases where variables affect each other, instead of focusing on one clear direction of causation, it can be useful to examine both an original function and its inverse. Starting here, you do just that using some familiar examples. Return to Carlo, who is in charge of his small company's IT department. To review: Part of Carlo's job responsibility is making sure the company remains current with the latest technology, especially when doing so could cut costs, increase productivity, or both. Carlo found that the latest high-powered tablets represent a cost-effective way to move the IT department into the future. In fact, for every two tablets he purchases, he can replace one of the 25 computers. You can treat the number of computers to be replaced as a function of the number of tablets to be purchased. The function's equation would look like C(t) = −0.5t + 25, where C(t) is the number of computers the company has, and t is the number of tablets to be purchased. To find the value of C(2), substitute 2 for t in that last equation and you have C(2) = −0.5(2) + 25 = 24. This information emerges: If Carlo purchases 2 new tablets, he would still have 24 computers left. Similarly, if Carlo purchases 4 new tablets, he would have C(4) = −0.5(4) + 25 = 23 computers left. But now, what if Carlo wanted to retain only 15 computers? How could he figure out how many tablets he should purchase? For that, his best bet would be to use the inverse function. Let C(t)'s inverse function be T(c), which models the number of tablets to be purchased, where c is the number of computers the company has. Since C(t) is linear, its inverse function must also be linear. The equation describing the inverse function is: T(c) = −2c + 50. This implies that for each computer the company keeps, Carlo must give up purchasing two new tablets. Substitute c = 0 into T(c), and you have T(0) = −2(0) + 50 = 50. This implies that if the company replaces all its computers, Carlo can purchase 50 new tablets.
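A quick check that T really is the inverse of C is to compose the two functions and confirm you get back the value you started with. A minimal Python sketch:

```python
def computers(t):
    """C(t) = -0.5t + 25: computers remaining after purchasing t tablets."""
    return -0.5 * t + 25

def tablets(c):
    """T(c) = -2c + 50: tablets Carlo can purchase if the company keeps c computers."""
    return -2 * c + 50

print(tablets(15))             # 20 -- keeping 15 computers means buying 20 tablets
print(computers(tablets(15)))  # 15.0 -- the composition returns the original input
print(tablets(computers(4)))   # 4.0  -- and it works in the other order too
```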

Given a data set, a proposed function to model the data set, a conclusion, the supporting calculation to the conclusion, and a set of identified constraints, evaluate the overall validity of the conclusion.

As you no doubt know, when a small company called Amazon began selling books online in July 1994, very few people gave it much thought, and those who did expected it to fail. Who would buy a book without even having a chance to flip through its pages? Today, of course, Amazon is a huge, wildly successful company. Can Amazon's exponential growth continue? This is the final lesson in this course. By this point, you have reviewed, used, and checked the conclusions of quite a few regression models, and this lesson is really a summary of all you have learned so far. In addition, you will add one final tool to the SOME things to help evaluate a model and its use. Although this last tool does not create quite as tidy an acronym, it is the final important step.

Asymptotes in Exponential Growth and Decay

Asymptotes can be found in a lot of situations. The growth of bacteria, when given a supply of nutrients, is a prime example. Consider a bacteria culture that begins with 3,000 live bacteria and the number roughly doubles every 6 hours. This is an example of exponential growth and can be represented with the function f(x) = 3,000 × 2^(x/6). There is an asymptote here, but it is toward the negative x-values. This means that the bacteria population would tend toward zero if you looked backward in time. The culture of 3,000 bacteria was grown from a smaller culture at one point, possibly from a single bacterium. Human population growth can be modeled by similar exponential equations. For example, the human population of New York City was approximately 19.8 million in 2015, with an average growth rate of 2.1% each year. The population of New York City after 2015 can be estimated with the function f(t) = 19.8 × 1.021^t, where t represents the number of years since 2015. In this situation, the population of New York City would tend toward zero as you looked backward in time (toward the negative x-values). Of course, the population of New York City would get smaller and smaller as you looked back in time, until the point was reached when no humans lived in the area at all. Another example of asymptotes is radioactive decay. Certain substances are radioactive, and they decay as time goes on. Radioactive materials always have a half-life, which is the point in time where half of the original material remains. For example, radioactive carbon-14 has a half-life of 5,730 years. This means if you had 100 pounds of carbon-14 today, you would have only 50 pounds of it left after 5,730 years. The other 50 pounds would have decayed into another element. Here is a graph of what this would look like over time: Every 5,730 years, carbon-14 decreases by half. Think about this for a minute or two. When you keep cutting an amount in half, such as carbon-14, the amount gets smaller and smaller, which makes the amount of it tend toward zero, but it never actually becomes zero. That means this is an asymptote. The function f(t) = 33(0.5)^(t/8) represents 33 grams of a given chemical with a half-life of 8 hours, where t is measured in hours. What factor creates a natural asymptote in this situation? The number of grams of the chemical: the amount of chemical will tend toward zero as time goes on. A culture of 100 bacteria triples every 6 hours. What natural asymptote is created in this situation? The number of bacteria: looking back in time, the number of bacteria would tend toward zero.
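Half-life models all follow the pattern amount × 0.5^(t / half-life). A minimal Python sketch using the 33-gram chemical with an 8-hour half-life shows the amount tending toward the asymptote y = 0 without ever reaching it:

```python
def remaining(t, initial=33.0, half_life=8.0):
    """f(t) = initial * 0.5 ** (t / half_life): grams left after t hours."""
    return initial * 0.5 ** (t / half_life)

for hours in (0, 8, 16, 24, 80):
    print(hours, round(remaining(hours), 4))
# 33.0, 16.5, 8.25, 4.125, 0.0322 -- halving every 8 hours, approaching 0
```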

Polynomial Regressions

At the same time, remember that polynomial regressions are really good at modeling some real-life phenomena, like free-falling objects. Also, polynomial models are the only models in this course that can model data with several turns, so polynomial regressions still have their place and purpose.

Average Rates of Change at Different Values

Comparing average rates of change can be useful in unexpected ways sometimes. Consider this example: As a call center manager, Risa knows that the number of help desk calls her team gets from a business depends on the number of customers of that business. Risa got a function that modeled the data: C(x) = 0.2x^2 + 6x + 2, where x is the number of customers and C is the number of calls. The table below summarizes the number of customers and the number of calls for five different businesses that Risa's team works with. x = number of customers: 10, 30, 35, 50, 60. C(x) = number of calls: 82, 362, 457, 802, 1082. Suppose Risa is curious whether she gets more calls overall from the smaller businesses (x = 10 and x = 30) compared to the larger businesses (x = 50 and x = 60). She decides to use the average rate of change to compare these two situations. She first finds the average rate of change for x = 10 and x = 30 by finding the slope of the line through the points (10, 82) and (30, 362): m = (change in y)/(change in x) = (362 − 82)/(30 − 10) = 280/20 = 14 calls per customer. Similarly, she then finds the average rate of change for x = 50 and x = 60 by calculating the slope of the line through the points (50, 802) and (60, 1082): m = (change in y)/(change in x) = (1082 − 802)/(60 − 50) = 280/10 = 28 calls per customer. Risa notices that the slope through the points (50, 802) and (60, 1082) is steeper, meaning that the larger businesses need more customer support from her team than smaller businesses. This could be key information when Risa asks for resources for her team or when her company negotiates contracts with larger businesses. In terms of Risa's team, which rate of change is preferable? This really depends on her. On one hand, an increase in customers means job security. On the other hand, more calls can also mean more stress, especially if her team is not equipped to handle more calls. Cathay is a commissioned sales representative who has tracked the number of weekly sales calls she has made compared to the number of alarm systems she has sold that week. Think of the data as an ordered pair in the form (number of calls, number of alarms sold): (0, 0), (10, 3), (20, 15), (40, 24), and (55, 40). Cathay is curious to know at which point she makes more weekly sales on average. Of the choices below, which average rate of change is better for Cathay's sales? The average rate of change between (10, 3) and (20, 15) is the greatest, at 1.2. This means that Cathay sold about 1.2 alarms per call when she went from 10 calls per week to 20 calls per week. Think back to the introductory example. Did you or Halley have a faster average speed during the first 10 seconds? Since you both covered 35 meters, your average rate of change was exactly the same. But what about during the last 10 seconds of the race? Which one of you had a faster average speed over the interval [20, 30]? You can find your average rate of change by finding the slope through the two points corresponding to your distance covered from 20 seconds to 30 seconds, (20, 60) and (30, 100): m = (change in y)/(change in x) = (100 − 60)/(30 − 20) = 40/10 = 4 meters/second. You can find Halley's average rate of change by finding the slope through the two points corresponding to her data, (20, 65) and (30, 100): m = (change in y)/(change in x) = (100 − 65)/(30 − 20) = 35/10 = 3.5 meters/second. Since you covered 4 meters per second and Halley covered only 3.5 meters per second, you had the greater average rate of change during the last portion of the race. This means you ran about 0.5 meters more per second than Halley. Lesson Summary This lesson focused on interpreting rates of change and identifying optimal rates of change given the situation or context. Here is a list of the key concepts you learned in this lesson: Sometimes a comparison is needed between two or more average rates of change. For each average rate of change, you first need an interval (like [a, b]), and then you calculate the average rate of change using the slope formula and compare the numbers. Sometimes a comparison is needed between two or more instantaneous rates of change. Given the instantaneous rates of change, you should be able to interpret which instantaneous rate of change would be optimal in the context of the problem.
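Risa's comparison is just two slope computations with the model C(x) = 0.2x^2 + 6x + 2, which a few lines of Python can reproduce:

```python
def calls(x):
    """C(x) = 0.2x^2 + 6x + 2: help desk calls from a business with x customers."""
    return 0.2 * x**2 + 6 * x + 2

def avg_rate(f, a, b):
    """Average rate of change of f over [a, b]: the slope between the two points."""
    return (f(b) - f(a)) / (b - a)

print(avg_rate(calls, 10, 30))  # 14.0 calls per customer (smaller businesses)
print(avg_rate(calls, 50, 60))  # 28.0 calls per customer (larger businesses)
```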

Instantaneous Rate of Change and a Train

Average rate of change is a way of asking for the slope in a real-world problem. The slope of a line tells you how something changes over time. The average rate of change is defined as the slope over an interval. But what if you want to know the rate of change at a particular instant? The instantaneous rate of change is defined as the rate of change at a particular moment. There are ways to calculate instantaneous rates of change by hand, but you will not have to learn those in this course. Instead, you should be able to identify the line that represents an instantaneous rate of change at a point. In the example below, you will see how instantaneous rates of change for linear functions are the same as the slope of the line. A train has started from Portland to Los Angeles, and it has traveled 180 miles. Let the function D(t) model the number of miles the train has traveled, where t is the number of hours starting now. The following applet shows the graph of D(t), with a slope triangle to calculate its slope at any two given points, A and B. At point H, the slope triangle shows the instantaneous rate of change at any point on the line. Keep in mind that for this course, you just need to know how to interpret and work with instantaneous rates of change. You do not need to know how to calculate the instantaneous rate of change yourself. That said, in the applet you should have noticed that the instantaneous rate of change is still 55, implying the train travels at 55 miles/hr, a constant speed. Why? Because the function is a linear function: linear functions always have the same instantaneous rate of change everywhere. Since lines always increase by a constant rate, linear quantities always increase (or decrease) by the same amount, moment to moment. A cell phone company charges a flat fee of $94 per month. A customer's cost for service is given by the function C(m) = 94m, where m represents the number of months. After 3 months of service, the cost totals $282. After 4 months of service, the total cost is $376. What is the instantaneous rate of change at the end of the third month? $94 per month. The instantaneous rate of change at any point on the line is simply the line's slope. Lesson Summary This lesson focused on linear functions and their average and instantaneous rates of change. Here is a list of the key concepts in this lesson: For a linear function, the slope tells us the average rate of change between any two points as well as the instantaneous rate of change at any individual point. On a linear function, the average rate of change over any two given points is the same. On a linear function, the instantaneous rate of change is the same at any point.

Input-Output Pairs

Avery's store is having a sale on bottles of fragrance. At 1:00 p.m., she checks and finds that the store has sold 15 bottles. At 5:00 p.m., Avery's store has sold 60 bottles. The following table gives the data. Time: 1 p.m., bottles sold: 15; time: 5 p.m., bottles sold: 60. What is the average rate of change between 1:00 p.m. and 5:00 p.m.? Well, to find that, Avery uses the rate of change formula: m = (y2 − y1)/(x2 − x1) = (60 − 15)/(5 − 1) = 45/4 = 11.25 bottles per hour. Between 1:00 p.m. and 5:00 p.m., Avery's average rate of sales for fragrances was 11.25 bottles per hour. It is okay to use a decimal in the average value in this situation. You often see such values; for example, you might see that a family in a certain country has 1.73 children on average. Notice that you only needed two input-output pairs to find the rate of change. The following table records Sunrise Sky Spa's membership during a certain period: What is the average rate of change per month in Sunrise Sky Spa's membership between January 2010 and January 2011? Lesson Summary In this lesson, you used tables of input-output pairs to understand average rates of change, such as pounds lost on a diet, bottles of fragrance sold, or the rise in clock speeds of CPUs over time. Here is a list of the key concepts in this lesson: When looking at a table, you only need two input-output pairs to find the average rate of change. The formula to calculate the average rate of change is m = (y2 − y1)/(x2 − x1). Negative rates of change indicate the value is decreasing. Positive rates of change indicate the value is increasing. Average rate of change tells change over a given period of time; this information can be used to make predictions of future or past events.

Short-Term and Long-Term Differences

Because of how things happen in the real world, what happens in the short term can be very different from what happens over the long term. When this happens, you can compare instantaneous rates of change to determine what each rate of change means in the context of the situation. "Short term" refers to values close to when something is first starting, such as the launch of a new game applet. "Long term" refers to values further out from when something first started, such as many weeks into the launch of a new game applet. For example, recall Clever Apps's release of the new game Zonkers. Examine these two graphs of Scenario 1 to compare what is happening at point A in each graph, as well as the instantaneous rates of change for Week 1 to Week 10. In Week 1, there were about 100 downloads of Zonkers and downloads were increasing at a rate of 150 per week. In Week 10, there were only about 2 downloads per week, decreasing by about 1 per week. Note that a user cannot download part of the game, so the values need to be rounded to make real-world sense. How do the short-term and long-term rates of change compare? If you were to look only at the short-term rate of change in the first graph, you might predict that Zonkers is extremely popular and that the number of downloads is going to continue increasing rapidly. However, once you look at the long-term rate of change in the second graph, you see that after a certain point, there are basically no more Zonkers downloads. Here is the key thing to remember: Short-term rates of change may not predict what will happen in the long term.

Paired Data Points

Begin by looking at these numbers in the following table. (Note that the number of users in each case is in millions, so, for example, the number 400 for the year 2000 is actually 400,000,000 users. The number 2009 for the year 2010 is actually 2,009,000,000—over two billion users!) Year and worldwide internet users (in millions): 1995, 40; 2000, 400; 2005, 1025; 2010, 2009; 2015, 3225. This is good information, but does it "paint a picture" for you? A graph might be a better way to see the information more visually. Begin by identifying the pairs of data points in the table. Each pair contains a year and the corresponding number, in millions, of internet users at that time. Note that the convention is to write the independent variable first—in this case the year—and to write the dependent variable second. So the ordered pairs are: (1995, 40), (2000, 400), (2005, 1025), (2010, 2009), and (2015, 3225). Next, construct a graph with its x-axis representing the passage of time in 5-year increments and its y-axis representing the number of internet users in millions. Finally, plot the pairs of points you just listed. The following graph depicts what you should have plotted: Point A on the graph is equivalent to the pair of data points (1995, 40), and also the same as the first row in the original table. B is the same as the pair (2000, 400) and the second row in the table. C is the same as (2005, 1025) and the third row in the table; D, the same as (2010, 2009) and the fourth row; and E, the same as (2015, 3225) and the fifth row.

Average Rate of Change From a Table

Benjamin is on a diet and has been tracking his weight loss in the following table.

Week | Weight (in pounds)
0 | 210
6 | 198

You can rewrite the data in the table as ordered pairs: (0, 210) and (6, 198). Then, you can label them as (x₁, y₁) and (x₂, y₂). Note that the numbers are subscripts, not superscripts, which would mean exponents. In this situation, x₁ stands for the x-value of the first ordered pair, which is 0. The formula to calculate average rate of change, m, is:

m = (y₂ − y₁) / (x₂ − x₁)

After six weeks, what was the average rate of change in Benjamin's weight? Use the formula:

m = (y₂ − y₁) / (x₂ − x₁) = (198 − 210) / (6 − 0) = −12/6 = −2 lbs per week

Note that the rate of change is negative, implying Benjamin was losing weight in that period. Benjamin's weight loss can also be seen in the following graph:

Given a real-world scenario, the graph of an exponential function modeling the scenario, and two x-values, interpret why one x-value's rate of change is optimal based on real-world context.

Better Hires, a business-to-business talent-recruiting company, is looking for new contracts, which means convincing potential clients that Better Hires is the best way to go. The company has been growing, and its success has been bringing in more customers. Now the management team just needs to put this persuasive information into a nice pamphlet for potential clients. In this lesson, you will use algebra to find optimal rates of change to entice customers, as Better Hires wants to do, or for other purposes.

Interpreting Rates of Change

Business and IT professionals encounter various rate-of-change scenarios in their daily work. Look at this table of data:

x | y
0 | 15
2 | 21
4 | 27
6 | 33

Notice that the function's value increases by 6 units each time its input value increases by 2 units. Divide: 6/2 = 3, and you can see the function's y-value increases by 3 units each time its input value increases by 1 unit. This implies that the function is linear, with a slope of 3. The starting value (when x = 0) is 15, implying the line's y-intercept is 15. The function's equation is f(x) = 3x + 15. Next, you will put the function into context. Below are two scenarios that can be modeled by this function.

I. Johan is testing the performance of a Yoshida cell phone. It has 4 GB of memory, which can run a maximum of 15 background applications at the same time. For each extra GB of memory, it can run 3 more background applications. This scenario fits the data and the function. With 8 GB of extra memory, the Yoshida cell phone can run a total of f(8) = 3(8) + 15 = 39 background applications at the same time.

II. An online store is running a promotion. Before the promotion, its daily revenue was $15,000. Since the promotion, its sales have increased by $3,000 per day. This scenario also fits the data and the function. Eight days into the promotion, daily sales would increase to f(8) = 3(8) + 15 = 39 thousand dollars per day.

Red Hot PC Fix fixes personal computers (PCs). It can fix 25 PCs per day with the current staff. The following table shows the number of PCs the company can fix with different numbers of extra employees. Let the function P(e) model the number of PCs the company can fix per day, where e is the number of extra employees. Write the equation of this function. P(e) = 6e + 25. With 9 extra employees, the company could fix P(9) = 6(9) + 25 = 79 PCs per day.

Lesson Summary

In this lesson, you got some additional practice with translating a rate of change into real-world meaning. Since rates of change are used everywhere in life, knowing what they are telling you is important. Here is a list of the key concepts from this lesson:

- You can use multiplication with a rate of change to determine the total amount of change.
- In a table of data for a function, if the y-value increases or decreases at a constant rate, the function is linear.
- In a table of data for a linear function, the rate at which the y-value changes with respect to the x-value is the function's slope.
- In a table of data for a linear function, the y-value when x = 0 is the function's y-intercept.
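If you like checking tables such as this one programmatically, here is a short Python sketch (an illustration, not part of the lesson) that tests the lesson's four data points for a constant rate of change and recovers the slope and y-intercept:

```python
# Sketch: check whether a table of (x, y) pairs is linear by testing for a
# constant rate of change, then recover the slope and y-intercept.
points = [(0, 15), (2, 21), (4, 27), (6, 33)]  # the lesson's table

rates = [
    (y2 - y1) / (x2 - x1)
    for (x1, y1), (x2, y2) in zip(points, points[1:])
]

if len(set(rates)) == 1:  # every consecutive pair gives the same rate
    slope = rates[0]                                  # 3.0
    intercept = points[0][1] - slope * points[0][0]   # 15.0
    print(f"f(x) = {slope}x + {intercept}")
```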

Why Polynomials Are or Are Not a Good Fit

By and large, polynomial regressions do not do a good job of looking too far into the past or into the future. That is because polynomials always go to infinity or negative infinity as the x-values get larger (positive) or smaller (negative). There are very few real-world scenarios where numbers can consistently get larger and larger. That means polynomial regressions will not help Sarah out as much as the other models. She eliminates a polynomial regression. At the same time, remember that polynomial regressions are really good at modeling some real-life phenomena, like free-falling objects. Also, polynomial models are the only models in this course that can model data with several turns, so polynomial regressions still have their place and purpose.

Interpreting Qualitative Variables

By now you have mastered the principle of comparing inputs and outputs when dealing with qualitative inputs and quantitative outputs, so it is time to do the opposite. Sometimes the input of a function is quantitative, while the output is qualitative. The following table depicts the results of a 5K race:

Place, P | Runner Initials, R
1st | BR
2nd | MC
3rd | AL
4th | EV
5th | RG

Figuring out who came in first is easy. Just look at the table and see that the runner with initials BR won the race. You can also easily tell that MC outperformed EV, and that RG came in behind all the other runners. You can also convert this data to function notation or coordinate notation (but note that you still would not be able to graph this data, since you do not have two quantitative variables). You could make an argument that either of these variables could be the independent or dependent variable here; in the data in the next table, it is assumed that P, the placement, predicts R, the runner's initials.

Place | Runner Initials | Function Notation | Coordinate Notation
1st | BR | R(1) = BR | (1, BR)
2nd | MC | R(2) = MC | (2, MC)
3rd | AL | R(3) = AL | (3, AL)
4th | EV | R(4) = EV | (4, EV)
5th | RG | R(5) = RG | (5, RG)

Metric to Standard Conversion

By now, you have probably realized that one of the most useful applications of inverse functions is in conversions. Here is one more common conversion: miles to kilometers. Betti is planning a road trip with a friend, Christiane, who grew up in Europe. Discussions of distance quickly get confusing. Betti is used to speaking about miles, whereas Christiane uses kilometers. Christiane keeps reminding Betti that a mile is equal to approximately 1.6 kilometers. Using function notation, write the equation of the function you would use to calculate the distance of this road trip in kilometers, if you knew the number of miles Betti and Christiane plan to travel. Then identify the independent and dependent variables of this function. K(m) = 1.6m, where m is the number of miles and K(m) is the corresponding distance in kilometers. Independent variable: Number of miles. Dependent variable: Number of kilometers.

Given the graph of a polynomial function for a real-world problem, translate the input and output pairs of the polynomial function into real-world meaning.

By now, you have seen how you can model various real-world phenomena with polynomial functions, but how do you interpret what a function is trying to tell you? To determine this, you will learn how to translate the input-output pairs of a polynomial into real-world settings. In this lesson you will work on using graphs to estimate input-output pairs and interpret those input-output pairs in the context of a real-world situation. This means you will see some other real-world situations that can be modeled with polynomials and how to make sense out of those polynomial models.

Independent and Dependent Variable Functions

Carlo is in charge of a small company's IT department. Part of his job description includes making sure the company remains current with the latest technology, especially when new technology reduces costs, increases productivity, or both. Carlo has discovered that the latest high-powered tablets represent a cost-effective way to move the company into the future. In fact, for every two tablets Carlo purchases, he can replace one of the company's 25 computers. You know how to derive the important pieces of information from a written scenario like this to construct a linear equation in function notation. In this lesson, you will practice this skill in a few different contexts. It is also time to practice writing functions and identifying variables.

Given two scenarios or graphs of a real-world situation, identify which scenario or graph will increase or decrease at a faster rate in the long term.

Clever Apps is releasing a new game application called Zonkers. The company has high hopes for Zonkers and is excited to analyze the information coming in on the number of downloads by users. Ideally, Clever Apps wants to see high download numbers in the first few days after the game launches, followed by even higher numbers as word spreads about what a fun game it is. In this lesson, you will see how that works out by comparing graphs of long-term instantaneous rates of change to determine which situation would be optimal for the people involved. You will learn how vital this information can be when people need to make decisions in business or in their personal lives.

Optimal Concavity

Computer viruses are nasty things. If you have ever had one, you know you want the virus to impact your computer as little as possible. Using concavity can help you identify optimal situations in this context. Consider this example: The function M(t) = 100 / (1 + 510e^(−0.2t)) models the percentage of memory occupied by the virus Golden Goddess, where t is the number of seconds since the attack started. The following graph depicts this function. From x = 0 to x = 31.17, the function is concave up, indicating that more and more of your computer's memory is being occupied by the virus. This is a worst-case scenario in terms of the virus, as it means the virus is spreading faster and faster. From x = 31.17 on, the function is concave down, indicating a decreasing rate of change. In this situation, this would be optimal, as it implies the computer's memory is still being occupied by the virus but at a slower and slower rate. Although the virus is still eating up memory, at least the pace of the destruction is slowing down. Which segment of the function would you prefer, as the owner of the computer in question? Well, assuming that you could not choose "no virus attack," you would no doubt prefer the concave-down segment, where the virus is destroying memory at a decreasing pace. You can also apply this line of thinking to the influenza virus. When you become infected with the flu, you feel it slowly start to affect your body as the virus starts replicating (this is the concave-up portion of the graph). The flu hits its "fever pitch" at the inflection point, but then your body starts to fight back, and the spread of the flu starts to slow down. You definitely want the flu to hit the "slowing down" phase as soon as possible, as that means your body is finally starting to respond and fight back. In this same line of thinking, concave down is optimal for the context of the situation.
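As a quick check on the 31.17-second figure, here is a small Python sketch. It relies on the standard fact that a logistic function of the form a / (1 + b·e^(−kt)) has its inflection point, where concavity switches, at t = ln(b)/k:

```python
import math

# Sketch: locate where M(t) = 100 / (1 + 510e^(-0.2t)) switches from
# concave up to concave down. For a logistic a / (1 + b*e^(-kt)), the
# inflection point is at t = ln(b)/k, half of the maximum value.
b, k = 510, 0.2
t_inflection = math.log(b) / k
print(round(t_inflection, 2))  # 31.17 seconds after the attack started
```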

Concave Up versus Concave Down

Concave up means that the function's values are either increasing faster and faster or decreasing slower and slower. Concave down means that the function's values are either decreasing faster and faster or increasing slower and slower.

Cubic Functions for Online Games

Consider this next example: Becky manages servers for a company that designs and publishes online games. One server that Becky manages hosts the game Zoo Fever, and she is checking the server's log to find the number of users yesterday. The data suggests that the number of online gamers can be modeled by the cubic function n(t) = −9.5t³ + 195t² − 1215t + 3180, where t is the number of hours since 12:00 p.m. (noon). This function is depicted in the following graph. You can see that this cubic function has two "turns." The number of online gamers decreased at first, then increased, and then decreased again. A linear or quadratic function would not be able to model data with two "turns" like this. "Turns" refers to how many times the data changes direction between increasing and decreasing. Lines (degree 1) cannot handle any turns. The data has to be increasing or decreasing, and must do so at a constant rate, for linear functions to be helpful. On the other hand, a quadratic (degree 2) can handle 1 "turn" in the data. A cubic (degree 3) can handle 2 "turns" in the data, and so on. For complicated data with multiple "turns," a polynomial function with an even higher degree would be needed. However, keep in mind that best practice is to fit the simplest polynomial function that is appropriate to the data. For example, data with two turns should be fitted with a cubic polynomial (degree 3). When two turns are very close together (or even on top of each other), they "disappear" from the graph. Consider the following set of functions:

g(x) = x³ − x
h(x) = x³ − (1/4)x
p(x) = x³ − (1/16)x
q(x) = x³

The graph of q(x) = x³ has no turning points, even though the other three graphs have 2. So, a cubic graph can have a maximum of 2 turning points, or as few as 0. However, when we are modeling real-world problems, most of the time we are interested in the simplest polynomial that can create the required number of turning points, so we will focus on the maximum number for each type of polynomial, and not these other cases. You will learn more about modeling in Lesson 28. Based on the number of "turns," is this graph a linear, quadratic, or cubic polynomial function? Since there is one "turn," this is likely a quadratic function (degree 2). Since there are two "turns," this is likely a cubic function (degree 3). You might be wondering why Becky would be interested in modeling the number of gamers, n, on her company's servers at any time, t. Such data can be helpful in predicting whether more servers are needed, especially if Zoo Fever is going to be expanding in the future or suddenly attracts more users. Having hourly data on the number of users allows Becky to forecast future needs, peak points of use, or when doing an update or patch might be least disruptive to the gamers. For example, to estimate the number of online gamers at 3:00 p.m. yesterday, Becky substituted t = 3 into n(t) and did the computation:

n(3) = −9.5(3)³ + 195(3)² − 1215(3) + 3180 = −9.5(27) + 195(9) − 3645 + 3180 = −256.5 + 1755 − 3645 + 3180 = 1033.5

Note that when she calculated, she did the exponent calculation first to get −9.5(27), and then she multiplied. The same order of operations applies when calculating 195(3)². The result implies that there were approximately 1034 gamers at 3:00 p.m. yesterday. The function's graph passes through the point (3, 1034).
Similarly, to estimate the number of online gamers at 11:30 p.m., Becky substituted t = 11.5 into n(t) and computed:

n(11.5) = −9.5(11.5)³ + 195(11.5)² − 1215(11.5) + 3180 = −9.5(1520.875) + 195(132.25) − 13972.5 + 3180 = −14448.3125 + 25788.75 − 13972.5 + 3180 = 547.9375

The result implies that there were approximately 548 gamers at 11:30 p.m. yesterday. The function's graph does appear to pass through the point (11.5, 548). This means that if Becky had to choose between 3:00 p.m. and 11:30 p.m. to do a server update (assuming yesterday's data is representative of future days), she would likely go with 11:30 p.m., since fewer gamers would be interrupted by the update.
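If you want to verify Becky's arithmetic yourself, here is a minimal Python sketch of the cubic model (the function name n follows the lesson):

```python
# Sketch: evaluating Becky's cubic model at 3:00 p.m. (t = 3)
# and 11:30 p.m. (t = 11.5).
def n(t):
    """Number of online gamers t hours after noon (the lesson's cubic model)."""
    return -9.5 * t**3 + 195 * t**2 - 1215 * t + 3180

print(n(3))     # 1033.5   -> about 1034 gamers
print(n(11.5))  # 547.9375 -> about 548 gamers
```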

Using Rates of Change

Consider this next example: On a recent trip to the grocery store you saw that apples cost $1.49 per pound. This is a rate of change since this gives you "dollars per pound." What is the cost of buying 5 pounds of apples? It would cost $1.49×5=$7.45 to buy 5 pounds of apples. This is one way you can use multiplication with rates of change to do calculations. Assume you have spent $28.15 at the grocery store on other items, and you plan to purchase x pounds of apples. The function C(x)=1.49x+28.15 models the total cost of your trip. If you purchase 5 pounds of apples, the total cost can be found using the function: C(5)=1.49(5)+28.15=35.60 The total cost of this trip is $35.60. Notice that C(x)'s rate of change is the cost per pound of apples ($1.49 per pound), and the function's y-intercept is the amount of money you have spent before buying any apples ($28.15). A plant is 2.4 inches tall when it is planted, and it grows 0.02 inches per day. How tall will it be after 15 days? 2.7 inches Sarah has $50 in her piggy bank, and she spends $3.75 to purchase ice cream every day. How much money is left after 8 days? $20
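Here is a brief Python sketch, purely illustrative, that bundles the grocery example and the two practice problems into code:

```python
# Sketch: using a rate of change (price per pound) inside a linear cost model.
def trip_cost(pounds_of_apples, rate=1.49, other_items=28.15):
    """Total cost: $1.49 per pound of apples plus $28.15 of other items."""
    return rate * pounds_of_apples + other_items

print(trip_cost(5))     # 35.6 -> $35.60 for the whole trip
print(2.4 + 0.02 * 15)  # 2.7 inches: plant height after 15 days
print(50 - 3.75 * 8)    # 20.0 dollars left in Sarah's piggy bank
```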

Real-Life Context for Concavity

Consider this next example: The function models the temperature inside an oven starting from the second it is turned on. Once the oven reaches the set temperature, it cycles off and on to maintain the temperature at around 350°F. Break the function into segments by concavity: From x = 0 to x = 10, the function is concave up because the temperature is increasing faster and faster as time goes on. From x = 10 to about x = 12.5, the function is concave down because the temperature is increasing slower and slower and then decreasing faster and faster as time goes on. Note that the temperature rises at first and then falls in this concave-down section. From about x = 12.5 to about x = 15, the function is concave up. The temperature falls at first, and then rises. Note that the concave-up function indicates that the temperature first decreases slower and slower, then increases faster and faster. From x = 15 to x = 17.5, the function is concave down because the temperature is increasing slower and slower and then decreasing faster and faster. Note that the temperature rises at first, and then falls in this concave-down section. From x = 17.5 to x = 20, the function is concave up. The temperature falls at first, and then rises. Note that the concave-up function indicates that the temperature first decreases slower and slower, then increases faster and faster. Why does an oven's temperature work this way? It is because you need a relatively steady temperature when baking things. In terms of an oven, the temperature is concave up during the "preheating" stage (from x = 0 to x = 10). Once the oven is heated, the temperature is concave down while the oven heating element is off (there is a little bit of heat from the element that still heats up the oven slightly, but then the oven starts cooling off slowly). However, once the oven temperature dips down to a certain point, the oven heating element comes back on and the curve becomes concave up (the oven heats back up slightly to maintain a steady baking temperature). Review the graph of this next. Now examine a zoomed-in portion of the function's graph from x = 10 to x = 15. The red triangles help you see the temperature change every 0.25 minutes, or every 15 seconds. From point A to point B, the function is concave down. Keep in mind that concave down indicates that the temperature is increasing slower and slower and then decreasing faster and faster, or that the oven heating element has been turned off. From point B to point C, the function is concave up. Here, concave up indicates that the temperature is decreasing slower and slower and then increasing faster and faster, or that the oven heating element has come back on to slightly heat the oven back up. Reminder: In a concave-up segment, a function's values either increase faster and faster or decrease slower and slower. In a concave-down segment, a function's values either decrease faster and faster or increase slower and slower. Overall, there are two ways, or two situations, for both types of concavity. The table below summarizes these situations:

 | Situation 1 | Situation 2
Concave Up | The function's values are increasing faster and faster. | The function's values are decreasing slower and slower.
Concave Down | The function's values are increasing slower and slower. | The function's values are decreasing faster and faster.

That is, concave up happens in one of two ways: the function's values are increasing faster and faster or decreasing slower and slower.
On the other hand, concave down happens in one of two ways: the function's values are increasing slower and slower or decreasing faster and faster. In the following graph, during the first 10 minutes, is the function's rate of change increasing or decreasing? In the first 10 minutes, the function's rate of change was increasing. Concave up means the rate of change is increasing, which is what is happening in the first 10 minutes according to this graph. During the first 10 minutes, is the function's value increasing faster and faster, or more and more slowly? Re-examine the graph to answer this question. In the first 10 minutes, the function's value was increasing faster and faster. This is just another way to say a function is concave up. Lesson Summary In this lesson you learned the concept of concavity and how a function behaves in a concave-up or concave-down segment. Here is a list of the key concepts in this lesson: Concavity indicates how things are "turning around" in a given situation. Concave up happens when a function's values are increasing faster and faster or decreasing slower and slower. Concave down happens when a function's values are increasing slower and slower or decreasing faster and faster.

Defining Coordinates

Coordinates provide you with a method to translate equations into pictures, and by appealing to your visual intuition, graphs give you a useful tool. Still, some conventions are needed so that everyone sees the same things when looking at a graph. How do you look at a specific data point, or coordinate? Well, first, you need a neutral location to begin measuring. This neutral location is the origin. In the following graph, the origin is the point labeled A. In the context of a city map, the origin might be the city center. In that context, it may be no surprise that the actual coordinates of A are (0, 0). But what does (3, 2) mean as a specific coordinate or data point? The horizontal coordinate (or location) comes first and the vertical coordinate (or location) second. So, for example, the pair of coordinates (3, 2) describes the point that is 3 units to the right (the positive horizontal direction) and 2 units up (the positive vertical direction) from the origin. This is why point B has the coordinates (3, 2). To remember which direction to start with, remember that "h" comes before "v"; that is, the convention is that "horizontal" comes before "vertical." On the other hand, one or both of the coordinates could be negative. In that case, measure away from the origin in the opposite direction. For example, the point (−2, −1) is 2 units to the left (the negative horizontal direction) and 1 unit down (the negative vertical direction). These coordinates do not have to be integers; they can be any two real numbers. For example, (3, 2.001) lies just above (3, 2), and the point (−½, 0) is half a unit to the left of the origin. The following graph depicts these two new coordinates.

Consider the following function from A to G. On which interval is the function concave down?

Correct! From A to B, the function's y-value is increasing. From A to B, the function's graph matches that of y = −x².

Which statement is true? Consider the life expectancy model. Recall that r² = 0.98 in that example. Would it be acceptable to use this equation to predict the life expectancy for someone born in 1972? y = 0.1729 × (x − 1960) + 69.73. If so, what is your prediction? Would it be acceptable to use the same equation, y = 0.1729 × (x − 1960) + 69.73, to predict the life expectancy for someone born in 2035? If so, what is your prediction?

Data and information can provide the ability to make better predictions; a prediction is then based on facts and trends and not just a guess. Yes, it would be appropriate for 1972, since 1972 is an interpolation value here. The prediction is y = 0.1729 × (1972 − 1960) + 69.73 ≈ 71.8 years. This linear model fits interpolation values well, so you can simply substitute 1972 for x to find the corresponding life expectancy. No, it would not be appropriate for 2035. The year 2035 is well beyond the data, making it an extrapolation value, and this linear model does not handle extrapolation values well at all.

Graphs and Data

Data tables and graphs can be understood as two sides of the same coin: both provide the same information, and with one, you can easily create the other. In this lesson, you will learn how to move between graphs and data tables. Any graph you create will be placed within the Cartesian Coordinate System. Examine the following depiction of this system. Typically, the independent variable is graphed along the horizontal x-axis, while the dependent variable is graphed along the vertical y-axis. (Note: Always keep in mind that the context of the problem really tells you which variable is the independent and which is the dependent variable.) A data point's location on the coordinate plane can be written as (x, y), where x is the value of the independent variable at a given point, and y is the value of the dependent variable at the same point. Practice working with the following table of some hypothetical data: a table of sales for a store in its first six months of operation.

Month | Sales (in thousands of dollars)
1 | 0.4
2 | 3.2
3 | 4.5
4 | 7.6
5 | 9.6
6 | 12.4

As you can see, the sales are not in whole numbers. The value 0.4 represents 0.4 × 1,000, or 400 dollars in sales. Looking at data represented in this way makes large numbers easier to read and understand. In order to graph this data, simply read down the chart, with Month representing the x-coordinate in any given pair and Sales representing the y-coordinate. You can then change the data table into coordinates:

Month | Sales (in thousands of dollars) | Corresponding Coordinate
1 | 0.4 | (1, 0.4)
2 | 3.2 | (2, 3.2)
3 | 4.5 | (3, 4.5)
4 | 7.6 | (4, 7.6)
5 | 9.6 | (5, 9.6)
6 | 12.4 | (6, 12.4)

The resulting graph is depicted next using the coordinates just displayed: Now that you have a graph, you can visually analyze the data. Note that sales are trending positively throughout the entire six months, and that the data is relatively linear across the months (meaning it is fairly straight). This indicates a relatively stable or constant rate of growth. Now test your knowledge with another hypothetical situation. The introduction of robotic technology in the automobile industry is already creating disruption, and it is likely to bring about a predictable decline in the number of available jobs in car manufacturing in the future. The predicted number of jobs lost in Detroit (in three-year increments) is shown in the following table: In case you were curious, the graph for this data is depicted next.
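If you would like to reproduce a plot like this yourself, here is a hedged Python sketch using the matplotlib library; the library choice is an assumption, since the lesson itself uses prepared graphs:

```python
# Sketch: turning the sales table into coordinate pairs and plotting them.
# Requires matplotlib (pip install matplotlib).
import matplotlib.pyplot as plt

months = [1, 2, 3, 4, 5, 6]
sales = [0.4, 3.2, 4.5, 7.6, 9.6, 12.4]  # in thousands of dollars

plt.plot(months, sales, "o-")  # points plus a connecting line
plt.xlabel("Month")
plt.ylabel("Sales (thousands of dollars)")
plt.title("First six months of sales")
plt.show()
```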

Interpreting Tables of Data

Did you know that Tim Berners-Lee published the world's very first website back in the summer of 1991? Since those good old days, the internet has grown by leaps and bounds, just the way a good financial investment grows over time. Sometimes it can be helpful to look at data visually, as on a graph. Sometimes, however, using a table can be more helpful. In this lesson, you will compare the same information in tables, graphs, and function notation to see how one view can shed light on another.

Given a real-world scenario, a corresponding logistic graph, and an instantaneous rate of change, interpret the instantaneous rate of change in context.

Did you know that electrons can only move so fast? Do not misunderstand; they do move fast. But there is a limit to how fast they can move at any one instant, which means there is a limit on their instantaneous rate of change. Since computers rely heavily on electrons for how they work, this limit on the instantaneous rate of change naturally puts a limitation on just how fast computer processors can work. In this lesson you will get additional practice on interpreting instantaneous rates of change in context. You will also see 1) how an instantaneous rate of change measures the change of one variable with respect to another at a particular instant; 2) why an instantaneous rate of change can be thought of as "slope at a single point;" 3) why instantaneous rates of change tend toward 0 for far-distant values; and 4) why knowing the units for a calculation is so important.

A More Meaningful Model for Human Population

Earlier, you saw that human population could be modeled by the exponential function p(t) = 0.0000021399e^(0.0148317t), where t is the number of years since 1800. However, the Earth's resources are limited, and it is impossible for the rate of increase of the human population to keep growing and growing. Since exponential functions always increase at a faster and faster pace, you should consider a different model for global population. A logistic function should be used to model human population instead. That said, keep in mind that there is much debate about which logistic model best represents the world population; there are many such logistic models out there. Here is one possible logistic function to model the human population:

P(t) = 8.9 / (1 + 1468.29e^(−0.04t)) + 1.21

Note that P(t) is the world's population in billions, and t is the number of years since 1800. The following is the function's graph: To calculate the average rate of change from x = 200 to x = 300, you would perform this calculation:

rate of change = (change in y-value) / (change in x-value) = (9.94 − 6.08) / (300 − 200) ≈ 0.04 billion per year

The result implies that the average rate of increase of the human population from 2000 to 2100 would be 40 million people per year. Note that the slope of the line between points A and B is also 0.04 billion people per year. That is no coincidence, since a line's average rate of change is equivalent to its slope. Use the following applet to change the locations of those two points and compare different average rates of change. The applet should help you further understand that the average rate of change is shown by the steepness, or slope, of the corresponding line. Move point B in the following applet to x = 400. Notice the change in the average rate of change calculation. Interpret the new average rate of change in this context. From 2000 to 2200, the world population would increase by an average of 20 million people per year.

Lesson Summary

In this lesson, you learned that using the coordinates of two points to calculate the average rate of change is the same as calculating the slope of the line passing through those two points. Now you can calculate the average rate of change with two given points, and you can also interpret the rate in different contexts. Here is a list of the key concepts in this lesson:

- The average rate of change between two points is found by using the slope formula.
- An average rate of change can also be thought of as how values are changing over an interval; that is, between the two points used to calculate the average rate of change.
- For logistic functions, the wider the interval you consider for an average rate of change, the closer the rate of change will be to zero.
- The units of an average rate of change are usually the units of the dependent variable divided by the units of the independent variable.

Exponential Curves versus Polynomial Functions

Even if the applet finds the best exponential curve to fit the given Horizon data, would a polynomial function fit better? That is a great question, but rest assured that no polynomial would fit this data better, because polynomial functions naturally go up and down (said another way, they have many turns). Moreover, for things that grow (or decay) in predictable, constant ratios, like revenue, exponential curves are much better suited.

Applying Function Notation

Examine the following table.

Year | Worldwide Internet Users (in billions)
1 | 0.040
6 | 0.400
11 | 1.025
16 | 2.009
21 | 3.225

You probably notice a difference between this table and the last one. Rest assured that this one contains the exact same data, but it has been renumbered to make things a bit easier to deal with. Here, Year 1 represents the calendar year 1995, and the number of users is expressed in billions, not millions. (To convert into billions, simply divide the usage numbers in millions by 1,000.) To apply function notation to this data, assign Year as the input variable to the function f. The function f has an output variable, notated f(YEAR), which in this case is the number of internet users (in billions). Each output f(YEAR) must be a single value paired to each input. The input variable Year is placed inside the pair of parentheses of f; this does not refer to multiplication. Now you can write f(YEAR) = worldwide internet users in billions. Here is a key point: The data are exactly the same as they were previously in the original table, the ordered pairs, and the graph. Nothing has changed except the notation of the ordered pairs. It is sometimes very helpful to rescale a graph, as in the following graph, to make it easier to understand. Examine the following graph. Which point represents a reasonable input year and output value for the calendar year 2011? Since x = 0 represents the year 1994, x = 17 represents the year 2011. From the rescaled graph, you can see that 2.2 billion internet users is a reasonable output value for the year 2011. From the following choices, which is the correct function notation to represent the input year and output value for these two calendar years? 1990: Input year = −4 and f(−4) = 0.0026. 1994: Input year = 0 and f(0) = 0.021. If 1995 is year 1, then calendar years 1994 and 1990 rescale to years 0 and −4, respectively. Divide each internet usage in millions by 1,000 to find the corresponding output values. Alexander is reading the following table to learn how much snow fell in three winter months in his city. In this case, the appropriate function has Month as the independent variable and the Amount of Snow as the dependent variable.

Month | Amount of Snow (in inches)
January | 5
February | 2
March | 0

Alexander's next question is, "How many months had more than two inches of snow?" In this case, he would change the independent variable to Amount of Snow and the dependent variable to Month, the inverse of his first function. Many times, information is presented in a graph that needs to be interpreted. However, the interpretation depends on the information being sought. A table can be read from right to left or from left to right, depending on which variable is the independent variable and which is the dependent variable.
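One informal way to think about a function defined by a table is as a lookup. The following Python sketch (an illustration, not the lesson's notation) stores the renumbered data and the rescaling rule:

```python
# Sketch: the renumbered internet-user data as a lookup table. The year is
# rescaled so that 1 means 1995, and user counts are stored in billions.
f = {1: 0.040, 6: 0.400, 11: 1.025, 16: 2.009, 21: 3.225}

print(f[1])   # 0.04 billion users in 1995
print(f[16])  # 2.009 billion users in 2010

def rescale(calendar_year):
    """Convert a calendar year to the table's input variable (1995 -> 1)."""
    return calendar_year - 1994

print(rescale(2011))  # 17
```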

In this example, what would the funding be for the services department in 2020? In the following example, what was the funding for the services department in 1998?

Explanation: f(20) = −0.01(20)² + 0.53(20) + 9.63 = 16.23. In 2020, funding for the services department would be approximately $16.23 million. Explanation: f(−2) = −0.01(−2)² + 0.53(−2) + 9.63 = 8.53. In 1998, funding for the services department was approximately $8.53 million.

Exponential vs. Linear Situations

To begin, compare two similar situations to see how an exponential function is different from a linear function. Start with the one you should already feel comfortable with: the linear function. Ping invests $10,000 in an account which adds $100 into the account every month. The second month, Ping will have 10,000 + 100 = 10,100 dollars in his account. In each following month, Ping will see an extra $100 added into his account. The function used to model the amount of money in Ping's account is linear because it has a constant rate of increase: 100 dollars every month. The function is P(x) = 100x + 10,000, where P(x) is the amount of money in Ping's account in dollars, and x is the number of months invested. In the equation for Ping, the constant rate, 100, is the slope of the linear equation. The constant rate of increase is what makes Ping's situation a linear function. The number 10,000 is the function's y-intercept, commonly called the starting value in application problems. Now compare that situation with one that uses an exponential function. Faye invested $10,000 in a different account which adds 1% of the money into the account every month. In the second month, Faye would have 10,000 × (1 + 0.01) = 10,000 × 1.01 = 10,100 dollars in her account. In each following month, Faye's money in the account would be multiplied by 1.01. For the second month, Faye and Ping have exactly the same amount. However, compare the amount of money in Ping's and Faye's accounts after six months using the following table:

Month | Money in Ping's Account | Change | Money in Faye's Account | Change
1 | 10,000 | | 10,000 |
2 | 10,000 + 100 = 10,100 | +100 | 10,000 × 1.01 = 10,100 | ×1.01
3 | 10,100 + 100 = 10,200 | +100 | 10,100 × 1.01 = 10,201 | ×1.01
4 | 10,200 + 100 = 10,300 | +100 | 10,201 × 1.01 = 10,303.01 | ×1.01
5 | 10,300 + 100 = 10,400 | +100 | 10,303.01 × 1.01 = 10,406.04 | ×1.01
6 | 10,400 + 100 = 10,500 | +100 | 10,406.04 × 1.01 = 10,510.10 | ×1.01

Notice that Faye has $10.10 more than Ping does after six months. It may not seem like much, but it can add up over time. We say that the function modeling the amount of money in Faye's account changes by a constant ratio instead of a constant rate. Changing by a constant ratio means that the previous amount is always multiplied by a fixed number to get the next amount. The account increases by 1% each month, which means multiplying the current total by 1.01 each month. The function's equation is F(x) = 10,000 × 1.01^x, where F(x) is the amount of money in Faye's account in dollars, and x is the number of months she invested. Note that $10,000 is the initial amount of money in the account. The constant ratio is what makes Faye's situation an exponential function. In general, an exponential function is in the form f(x) = C × a^x, where C is the initial amount and a is the common ratio (also called the "base" of the exponential function). Exponential functions are very useful when you have a constant ratio. These sorts of scenarios happen all the time in finance, biology, physics, and computer hardware and software. A company's revenues have been increasing each year for the past 5 years. Based on data in the following table, choose either a linear equation or an exponential equation for this function.

x (number of years since 2000) | y (revenues in millions of dollars)
0 | 2
1 | 5
2 | 8
3 | 11
4 | 14

f(x) = 3x + 2. The rate of change is 3, and the starting value is 2, making the equation y = 3x + 2. A company's revenues have been increasing each year for the past five years.
Based on data in the following table, choose either a linear equation or an exponential equation for this function.

x (number of years since 2000) | y (revenues in millions of dollars)
0 | 0.1
1 | 0.2
2 | 0.4
3 | 0.8
4 | 1.6

f(x) = 0.1 × 2^x. Correct! The y-values have a constant ratio (2) and a starting value of 0.1, making the function's equation y = 0.1 × 2^x.
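To see the constant-rate versus constant-ratio difference play out over time, here is a small Python sketch comparing Ping's and Faye's accounts; the month-1 indexing follows the tables above:

```python
# Sketch: Ping's linear account versus Faye's exponential account.
def ping(month):
    return 10_000 + 100 * (month - 1)    # constant rate: +$100 per month

def faye(month):
    return 10_000 * 1.01 ** (month - 1)  # constant ratio: x1.01 per month

for m in (2, 6, 24):
    print(m, round(ping(m), 2), round(faye(m), 2))
# Month 6: Ping 10500.0 vs Faye 10510.1 -- and the gap keeps widening.
```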

Given a real-world scenario either by written description or by graph that has a horizontal asymptote, interpret the horizontal asymptote in context of the real-world situation.

FasterAid offers customer service by phone for various clients. For quite a while, the overall customer satisfaction rating for all FasterAid's clients was stable at 50%, which meant that about half the customers were satisfied with the service. Then FasterAid implemented an intensive training program and its customer satisfaction rating improved to over 80%. However, once training ended, FasterAid's overall customer satisfaction rating dropped to 70% and became stable at that level. If FasterAid's customer satisfaction rating were treated as a function, the graph would have two horizontal asymptotes for the values of 50% and 70%. In this lesson, you will identify horizontal asymptotes and interpret their meanings in real-world situations. You may have worked with horizontal asymptotes in previous units, but the asymptotes you will see in this module will be for general functions you likely have not seen before. You will also learn about limiting factors and whether it is possible for a function to cross its own horizontal asymptote.

Identifying Rates of Change in Linear Functions

For adventure-seekers, the steeper the slope of a roller coaster, the better, because that is when you drop really fast. The rate of change is faster on a steep slope than on a flatter slope. Mathematically, steep slopes also imply faster rates of change. If you have two linear functions, you can find the slopes and then compare which line will increase or decrease at a faster rate by comparing their slopes.

Converting from One Currency to Another

For the sake of this lesson, one U.S. dollar can be exchanged for 0.86 euro (€) or 0.77 British pound (£). How are these two rates related to slope? Let E(u) = 0.86u model the number of euros worth u U.S. dollars, and let P(u) = 0.77u model the number of British pounds worth u U.S. dollars. The following graph displays both functions. Since E(u)'s graph increases faster than P(u)'s, E(u) has the larger slope. This is because the rate of change for euros is higher than that for British pounds. If a linear relationship has a higher rate of change, the graph of its line increases faster. Tostle Auto's Series A model can hold 10 gallons of gas, and it consumes 0.04 gallons of gas per mile; the Series B model can hold 12 gallons of gas, and it consumes 0.03 gallons of gas per mile. Let A(m) model the amount of gas, in gallons, in a Series A model's tank, and B(m) the amount in a Series B model's tank, where m is the number of miles to drive. Assume both cars start with a full tank. Which statement is correct? A(m)'s line decreases faster than B(m)'s, which implies the gas in a Series A auto decreases faster. Correct! The slope of A(m) is −0.04 gallon per mile, and the slope of B(m) is −0.03 gallon per mile.
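Here is a minimal Python sketch of the two gas-tank models from the practice problem; the tank-empty calculations at the end are an added illustration, not part of the original question:

```python
# Sketch: the two gas-tank models. Slopes are -0.04 and -0.03 gallons
# per mile, so A(m) decreases faster.
def A(m):
    return 10 - 0.04 * m  # Series A: 10-gallon tank

def B(m):
    return 12 - 0.03 * m  # Series B: 12-gallon tank

print(A(100), B(100))  # 6.0 9.0 -- gas remaining after 100 miles
print(10 / 0.04)       # 250.0 miles until a Series A tank runs empty
print(12 / 0.03)       # 400.0 miles until a Series B tank runs empty
```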

The cities of Frankenburg and Geraldston have been trying to reduce the number of homeless people in their cities. Consider the graph that follows. In the long term, is Frankenburg, represented by f(x), or Geraldston, represented by g(x), doing a better job in reducing the number of homeless people? Be sure to tie your answer to the instantaneous rate of change.

Frankenburg is doing a better job overall. This is because the instantaneous rate of change for f(x) is mostly negative, meaning a decrease in the number of homeless people in Frankenburg. On the other hand, the instantaneous rate of change for g(x) is mostly positive throughout the graph, meaning an increase in the number of homeless people in Geraldston.

Identifying Optimal Solutions With Linear Functions

Frequently people need to choose between two options to decide which one is better for a given situation—many times, you can do this by analyzing the graph for the two situations and visually identifying an optimal situation. For example, imagine you need to buy a new server at work to host websites. When the top two options—Whaamm and Zippedy—are thoroughly considered, their prices, ratings, and so on are comparable. How can you choose? This lesson will help you answer that question. In this lesson, you will learn how to compare the graphs of linear functions to select the best option, using the slopes, the y-intercept, and the value of x.


Given a real-world scenario either by written description or by graph, identify if the scenario would have an asymptote.

Interpreting Linear Regression Models

Given a scatterplot of real-world data, a linear regression function for the data, and the associated correlation coefficient, interpret the regression function and the associated correlation coefficient in context. Jack recently surveyed customers for his IT department. One question asked customers how long they had to wait to speak to a technician, while another asked about overall satisfaction with the service on a scale from 0 to 10. Each customer's responses are plotted on a graph, showing the time it took to answer the call as the x-coordinate and the satisfaction rating as the y-coordinate. You will see how a best-fit line can describe an upward or downward trend in the data points. You will even predict more data points using this line. You will also see how to tell if the correlation between the best-fit line and the data points themselves is strong or weak.

Given the graph of a polynomial function, translate solutions to polynomial equations into real-world meaning.

Given an input for a polynomial function, you have learned to calculate the output and estimate it by the graph. In this lesson you will learn how you can take a given output and estimate the associated input or inputs by graph. You may also see this phrased as "solving a polynomial equation." How is this skill useful? Consider this situation: The Scarlet Dragon's main dining room only seats 60 people; the restaurant has an overflow area for additional people, but management does not like using that area because it is farther from the kitchen. That said, during what hours of the day should the Scarlet Dragon's staff plan on using the overflow area? This question can be answered by estimating the input using a graph, since you have the output.

Given a real-world scenario, interpret why concave up or concave down would be optimal based on context.

Go into an Upscale Nest store, and chances are good you will not come out empty-handed. The stores are just overflowing with adorable and useful home décor items in every color of the rainbow. How could you not need a set of three lime green vases in graduated sizes? However, the fact is that Upscale Nest has been experiencing more pronounced seasonal variations in sales over the last couple of years. When sales are good, they are great. When sales are not good, they are terrible. You have learned what concavity means in context. In this lesson, you will compare concave-up and concave-down curves to determine which would be a better choice in different contexts and for different points of view.

Given a real-world scenario modeled by a logistic function, interpret what concave up or down means in context.

Golden Goddess is a new virus that attacks a computer's memory. At first, you will not even notice it is running. After a few minutes, the virus eats up more and more of the computer's memory. At some point, the rate of attack slows down, and it finally levels off once most of the computer's memory is used up. To model this situation, a logistic function would be a perfect fit. More importantly, the concept of concavity is important to understand how the virus affects your computer. In this lesson, you will learn about a logistic function's rate of change along the function's curve and how to interpret the changes in terms of concavity. You will also learn about concavity and inflection points.

Estimating Output Value

Graphing linear functions helps to estimate and translate solutions to real-world problems. Consider this example: A typical commission for a real estate agent is 6% of the sale. The following graph shows the relation between the cost of a house and the commission the agent makes, assuming a 6% commission rate. Suppose you are buying a house. If your agent makes $16,000 in commission, how much did the house sell for? To figure this out, find 16 on the y-axis and follow the horizontal line y = 16 over to the solid diagonal line (to Point A). Then follow a vertical line from that point to the x-axis to find that x is close to 266. The agent would take home $16,000 if the house sells for approximately $266,000. When you estimate values, pay attention to how much each grid represents along each axis. Along the x-axis, each 20 units is divided into 5 grids, making each grid 20/5 = 4 units. Point A is in the middle of 264 and 268, which is why you can estimate Point A's x-value to be 266. You are estimating these results; they are not exact. You know the x-value is close to 266, but in order to find an exact result, you would need to either zoom in very close on the graph or do the mathematics. In this course, estimation is good enough. Similarly, if you want to know how much your agent will make on the sale of your house, you can find the value of the house on the x-axis and match it to a commission on the y-axis. If your house is priced at $125,000, follow the line x = 125 up to the line that relates x and y and then over to the y-axis to find that y is around 7.6. That would be $7,600 for your agent. Each grid along the x-axis represents 4 units in this graph. Use this information to make better estimations.

Lesson Summary

In this lesson, you learned how to read a graph to estimate answers to various questions, such as a real estate agent's commission based on the sales price of a house. Here is a list of the key concepts in this lesson:

- Keys and labels are important when interpreting data given in the form of ordered pairs or specific coordinate values.
- To find the answer to a question about a linear application using a graph, trace a line from the appropriate axis at the value you have to the graph, then from the graph to the other axis to find the solution.
- To make a good estimation, calculate how many units each grid represents along the x-axis and y-axis.
- In function notation, write f(x) = y, not f(y) = x.
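As a sanity check on the graph estimates, here is a short Python sketch of the 6% commission relationship; exact algebra gives values close to, but not identical to, the estimates read off the graph:

```python
# Sketch: the 6% commission relationship, with x and C(x) in thousands
# of dollars, checked against the graph estimates.
def commission(price_in_thousands):
    return 0.06 * price_in_thousands

print(commission(125))  # 7.5 -> about $7,500, near the graph's estimate of 7.6
print(16 / 0.06)        # 266.66... -> a $16,000 commission means a ~$266,667 sale
```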

Given a real-world scenario either as a written description or as a graph, determine if the scenario would have an asymptote.

Have you ever considered how the number of employees on your team affects workload and overtime? In the business world, there are limits and boundaries that impact the average number of hours that employees can work. With reasoning, these limits can be determined. This lesson is about natural limiting factors, which are called asymptotes in mathematics. There are natural limiting factors in businesses, in IT problems, and to many things in your everyday life. This lesson also includes the examination of asymptotes resulting from exponential functions, both growth and decay models.

Given two scenarios or graphs of a real-world situation, interpret why one scenario is optimal based on rate of change and real-world context.

Have you ever had one of those days where you planned to get a lot done but despite a good start, something happened that interrupted your progress? If you compared the rate at which you were getting things done during the first part of the day to what happened later, the two rates would be quite different. In this lesson, you will compare the differences between short-term and long-term rates of change and learn why it may not be a good idea to use short-term results as a basis for long-term decisions.

Graphs of Inverse Functions

Have you ever looked up someone's phone number in a directory? What about doing a reverse search to find out the owner of a phone number you have? If looking up someone's phone number using her name were a function, then its inverse function would be using the phone number to find out her name. Function = Input: Name → Output: Phone number. Inverse Function = Input: Phone number → Output: Name. In this lesson, you will focus on just this kind of relationship: finding input-output pairs of a function and its inverse function when you are given a graph. You will also learn how to write the notation for inverse functions and learn when to switch input and output values. Finally, you will see how graphs of functions and the inverses of functions reflect each other.

Given a real-world scenario modeled by a polynomial function, translate a given rate of change of the polynomial function into real-world meaning.

Have you ever taken a pain medicine, felt some relief, and then noticed the effects started to wear off after a few hours? This is because your body metabolizes medicine and then the medicine gradually leaves your system. The concentration, c, in parts per million, of a certain pain medicine in your bloodstream after t hours is modeled by the equation c = −0.05t² + 2t + 2. The rate of change in the amount of this drug in your bloodstream after a certain number of hours can help explain the diminishing effects of the medicine. The focus of this lesson is finding rates of change for polynomial functions. You will learn about finding average rates of change for polynomial functions using the slope formula, and you will also learn about another type of change—instantaneous rate of change.
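Here is a small Python sketch evaluating the medicine model along with one average rate of change; the choice of the interval from t = 0 to t = 4 is an illustrative assumption, not a value from the lesson:

```python
# Sketch: evaluating c(t) = -0.05t^2 + 2t + 2 and an average rate of change.
def c(t):
    """Concentration in parts per million after t hours."""
    return -0.05 * t**2 + 2 * t + 2

print(c(0))                     # 2.0 ppm at the start
print(c(4))                     # 9.2 ppm after four hours
print((c(4) - c(0)) / (4 - 0))  # 1.8 ppm per hour, on average
```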

Achieving a Goal

Here's a different situation, but one still related to goals. Sunrise Spa's business was declining, and the company had to spend its reserve fund, maintained since 2000. In 2000, there was $550,000 in its reserve fund. The company decided that, from 2000 to 2010, it could not spend more than $45,000 per year on average from the reserve. In 2010, the company had $50,000 left in its reserve fund. Did the company achieve its goal? You can build a function to model the amount of reserve fund left, in thousands of dollars, as time passes. You can plot (0, 550), according to the given condition, and then draw a linear function with a rate of −45 thousand dollars per year. The line passes through the point (10, 100), so the graph implies that if the average rate of spending is controlled at $45,000 per year, over 10 years, the company would have $100,000 left in 2010. In reality, the company had only $50,000 left in 2010, so it missed its goal of keeping the average rate of spending under $45,000 per year. By what percent did Sunrise Spa miss its goal? Well, the difference between the goal and the actual value is 100,000 − 50,000 = 50,000. The company overspent by 50,000/100,000 = 0.5 = 50%.

Broadband

High-speed Internet access that is always on and much faster than dial-up.

Given the graphs of two logistic functions, identify which function will increase or decrease at a faster rate in the long term.

Highcrest Realtors is planning to replace most of its personal computers (PCs) nationwide. Extensive testing is needed at the initial stage to make sure there are no problems in the deployment, so progress should be slow at first. Eventually, however, the function for this initiative should approach the total number of PCs to be replaced. The nature of the project makes a logistic function a good fit to model the number of new PCs to be installed. In this lesson you will dig even deeper into logistic functions, paying special attention to their exponents. You will learn that the number in the exponent indicates how quickly a logistic function increases or decreases, what positive and negative exponents mean, and how to compare two functions that are equivalent except for their exponents.

Working With Graphs Lesson Introduction

How low can you set the price on your product and still make a profit? How many servers do you need to handle the traffic to your site? When is the best time to post new content on your blog to maximize its exposure? How does your tax liability differ if you change from contract work to a salaried position? All of these are mathematical questions involving relationships between quantities that depend on one another. These relationships are usually expressed as functions or equations, but you can often find the answers by looking at them graphically, and that is what you will learn about in this lesson. Coordinates are the key. In this lesson, you will learn to identify coordinates on a graph and how to determine where the independent variable and the dependent variable are located on a graph.

Identifying Asymptotes Graphically

Identifying asymptotes graphically is a quick way to spot asymptotes. One thing to keep in mind is that logistic functions always have two horizontal asymptotes, but some functions, like exponentials, have only one asymptote. Some functions, like polynomial or linear functions, do not have any asymptotes at all. For now, consider this: The key to identifying a horizontal asymptote graphically is to see when the y-values tend toward a specific value. For example, in the following graph, the y-values of the function tend toward y = 5 on the right-hand side of the graph, meaning there is a horizontal asymptote there. Also notice how the y-values of the function tend toward y = 0 on the left-hand side of the graph; this means there is a second asymptote in the following graph. You can tell when a function has a horizontal asymptote by looking to see if the function's y-values tend toward a specific value on either the left or right side of the graph. Here are some other examples and any associated asymptotes. One horizontal asymptote is at y = 7 on the right-hand side of the graph. [A graph shows a curve that passes through (0, 3), rises through the first quadrant through (2, 3.2), and then turns horizontal from x = 10 to beyond x = 16.] Two horizontal asymptotes are present, one at y = 3 on the left side and one at y = 8 on the right side. [A graph shows a curve that rises with increasing steepness through the second quadrant through (0, 40) to about (1, 52), then slopes down to about (5, 20), before rising again with increasing steepness through the first quadrant.] No asymptotes are present. On the left side of the graph, the function's y-values do not tend toward any specific value; they just keep decreasing without bound. On the right side of the graph, the function's y-values do not tend toward any specific value, either; they just keep increasing without bound.

Identifying the Inflection Point

In the previous section, you saw how all increasing logistic functions were concave up for half of the graph and concave down on the other half, with an inflection point in the middle. In this section, you will get more practice estimating where an inflection point might be. Consider this example: On an island near Seattle, major housing development has been going on since 2000 and people have been moving in, creating a new town. The number of houses on the island can be modeled by the function H(t) = 510/(1 + 400e^(−0.8t)), where t is the number of years since 2000. This function is depicted in the following graph. At first, the growth in the number of houses being built was fast, almost exponential. But as there was less and less land available in the designated development area, building of new houses slowed down, and eventually the number of houses on the island stabilized. Where is the inflection point in this situation? To answer that, you need to find the point where the function's graph changes from concave up to concave down. The point (7.5, 250) looks like a good estimate for the point of inflection. From x = 0 to x = 7.5, the function is concave up; from x = 7.5 on, the function is concave down. This analysis of the function's graph implies that from 2000 to mid-2007, the number of new houses was increasing faster and faster; however, since mid-2007, the number of new houses has been increasing slower and slower. This also means that the rate of growth peaked in mid-2007.

Lesson Summary

In this lesson, you revisited concavity and learned a new and very important term: inflection point. As you did so, you explored examples that ranged from a computer virus attack to the construction of new homes on a lovely island near Seattle. Here is a list of the key concepts in this lesson:

- Concavity essentially describes the shape of a graph; concave up matches the graph of y = x², and concave down matches the graph of y = −x².
- For an increasing logistic function, the first part of the graph will be concave up (increasing faster and faster) and the second part of the graph will be concave down (increasing slower and slower).
- For a decreasing logistic function, the first part of the graph will be concave down (decreasing faster and faster) and the second part of the graph will be concave up (decreasing slower and slower).
- An inflection point is where a graph's concavity changes. The inflection point is where the instantaneous rate of change is the largest, or most positive, for an increasing logistic function, or most negative for a decreasing logistic function.

Lesson Summary

In this lesson, you explored three scenarios to gain insights into the real-world meanings of concave-up and concave-down segments in a logistic function. You also noted that the preferred or optimal segment can depend on the perspective of the person affected by the situation. Here is a list of the key concepts in this lesson:

- An increasing function that is concave up would be preferred if you want the function to increase faster and faster.
- An increasing function that is concave down would be preferred if you want the function to increase slower and slower.
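Before moving on, here is one more note on the housing example. For a logistic function of the form L/(1 + C·e^(−kt)), the inflection point can be computed exactly: it occurs at t = ln(C)/k, where the output equals half the maximum, L/2. The following Python sketch (an optional cross-check, not part of the lesson) applies that fact to H(t):

```python
import math

# Exact inflection point of H(t) = 510 / (1 + 400 * e^(-0.8 t)).
# For any logistic L / (1 + C * e^(-k t)), the inflection point sits at
# t = ln(C) / k, where the output equals half the maximum, L / 2.
L, C, k = 510, 400, 0.8

t_inflect = math.log(C) / k      # ≈ 7.49 years after 2000
H_inflect = L / 2                # 255 houses

print(f"inflection point ≈ ({t_inflect:.2f}, {H_inflect})")
# The graphical estimate (7.5, 250) is close to the exact point (7.49, 255).
```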

Examining the Data

If you have been wondering where the logistic functions you have been working with so far come from, you are about to see it. In this section, you will learn how to run a regression analysis to find the function for a data set and how to use the coefficient of determination to determine how good a fit the regression function is to the data. As a first example, consider this scenario: A virus broke out in a large city, and the following scatterplot shows the percentage of people infected since the virus was detected. To predict the number of people who will be infected by the 30th day, you can model the data with a function. The type of function that fits a set of data is called a function of best fit, and its graph is called a curve of best fit. Having a curve of best fit would be very helpful to know what the behavior of the data is, on average. Note that the phrase "on average" is important. It implies data will not perfectly fit any function. Some data points will be above the function's curve and some will be below. The best-fit curve falls as close to all the points as possible; if the curve were shifted in any way, it would move farther away from some of the points. In this course, you use the least-squares regression (LSR) algorithm to find functions of best fit. You will not have to run the LSR algorithm yourself in this course; instead, keep your focus strictly on interpreting the results of an LSR. Suppose someone suggests that you should use an exponential function to fit this data. Given that this data has the S-shape that is very common to logistic functions, you might be wondering if this is truly a good fit. Even if you do not notice that, that is okay; there are still some other ways to see why exponential functions are not the best choice to model this data. Consider the following exponential regression: Notice the large gaps between many of the data points and the function's curve. This is also reflected in the very low coefficient of determination, r² = 0.1598. Remember that each regression has a coefficient of determination, which is a measurement of how well a function fits a data set. The coefficient of determination is always between 0 and 1, with values closer to 1 indicating a strong fit and values closer to 0 indicating a weak fit. A coefficient of determination of 1 means that all data points are on the function's curve, which very rarely happens in real life. In general, a coefficient of determination above 0.7 implies a strong fit between the data set and the function. Another way of thinking about the coefficient of determination is that it gives you an idea of how big a difference you can expect between the data points and the values predicted by the model. In this example, it does not appear that the trend of the data matches the exponential function's long-term behavior. The data points indicate that the percentage is leveling off, since at some point the spread of the virus slows down, but the exponential function keeps increasing more and more. It does not look like this function is a good fit for the data set. Before abandoning this function, try one more thing. Say you need to know the percentage of people in the city infected by the virus 30 days after it is detected. Using this exponential function, h(x) = 0.9856e^(0.1887x), substitute x = 30, and you find: h(30) = 0.9856e^(0.1887 × 30) ≈ 283. This means that by the 30th day after the virus started, 283% of the city's population would be infected.
While it does not make sense for over 100% of the populace to be affected by a virus, it is possible to look for the point where 100%, or all the people, would be infected. However, the data points clearly show that the number of infected people is growing at a slower and slower rate, especially once the percentage affected reached 28%. Based on that trend, it is unlikely that the virus will ever reach 100% of the population. All of this demonstrates that an exponential model is not a good fit for this set of data. In real life, you would rarely use an exponential function to model data because exponential functions go on infinitely and very little in our real world does that. For example, world population has been growing exponentially for a while, but the earth cannot support infinite people. Moore's Law has been working for a few decades, but integrated circuit technology has hit some bottlenecks lately. Amazon's revenue has been growing exponentially for a while, but it is hard to imagine its revenue will grow forever without limit, as an exponential function does. In real life, there is generally a limit to growth, which makes logistic functions a better model in situations with natural limitations. The next graph depicts a logistic function used to model the virus data set: This function fits the data points much better, as you can see by its coefficient of determination, r² = 0.9954. Using the following table, you can see that this is a strong model fit, so you can feel confident using this model to predict future values.

r²-value | Characterization
0.7 ≤ r² ≤ 1 | strong model / strong correlation
0.3 ≤ r² < 0.7 | moderate model / moderate correlation
0 < r² < 0.3 | weak model / weak correlation
r² = 0 | no model / no correlation
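If you are curious how software produces a regression like this, the sketch below shows one way it could be done in Python with SciPy's curve_fit. The data points here are hypothetical stand-ins for the virus scatterplot, and the starting guesses are assumptions of this sketch, so treat it as an illustration rather than the course's actual procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical infection data: day number and percent of people infected.
days = np.array([0, 3, 6, 9, 12, 15, 18, 21, 24])
pct  = np.array([1.0, 2.1, 4.5, 9.0, 15.8, 21.9, 25.6, 27.2, 27.9])

def logistic(x, L, C, k):
    # General logistic form used in this course: L / (1 + C * e^(-k x))
    return L / (1 + C * np.exp(-k * x))

# Least-squares fit; p0 is a rough starting guess for (L, C, k).
params, _ = curve_fit(logistic, days, pct, p0=(30, 30, 0.3), maxfev=10000)

# Coefficient of determination: r^2 = 1 - SS_res / SS_tot
pred = logistic(days, *params)
ss_res = np.sum((pct - pred) ** 2)
ss_tot = np.sum((pct - pct.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print("L, C, k =", params)
print(f"r^2 = {r2:.4f}")   # values above 0.7 indicate a strong fit
```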

Identifying Graphs of Linear Functions

If you have ever gone skiing, then you know the "beginner" slopes are not very steep and the "expert" slopes are extremely steep. It is easy to visualize slope, also known as rate of change, but how do you calculate it mathematically? You find the change in height (change in the y-value) as the distance changes (change in the x-value). Graphically, this ratio can be thought of as "rise" over "run" (great terms in the context of skiing) and the ratio corresponds to the rate of change. In this lesson, you will learn how to identify the graph of a given linear function by its slope and y-intercept, which is the point where the function intersects the y-axis. You will also see how to verify that a given graph is the correct graph for a particular function.

Solutions for Logistic Functions Using Only Graphs

If you have worked through previous units, you probably have seen how you can solve equations via a graph. In this lesson, you will revisit that skill with logistic functions and see how it works the same way here. You will see all the steps needed in the examples below. Revisit Sunrise Sky and Retreat Spas to study this. Look again at these companies' functions: S(t) = 40/(1 + 150e^(−0.7t)) and R(t) = 50/(1 + 150e^(−0.9t)), where S(t) and R(t) model the market shares of Sunrise Sky Spa and Retreat Spa, in percentages, and t is the number of years since 2000. These functions are modeled in the following graph. Reaching 25% of the market share is a milestone for any business because it represents your company holding a full quarter of the possible market. With that in mind, it can be telling to look at how long it took Sunrise Sky and Retreat Spas each to reach 25% market share. You can estimate that by looking at the graphs. On the graph, look for where R(t)'s y-value is 25. Since there is 1 year between each darker grid line on the x-axis, there are 1/5 = 0.2 years for each lighter grid line. With this in mind, it looks like R(t) passes through the point (5.6, 25). In function notation, you can write R(5.6) = 25. Either way you look at it, this implies that Retreat Spa had 25% of the city's market share in mid-2005. Similarly, by finding (7.9, 25) or S(7.9) = 25, you can tell Sunrise Sky Spa reached 25% of the city's market share in late 2007. S(t) grows more slowly due to its smaller k-value. You can check the solution by substituting t = 7.9 into S(t) and seeing whether the function's value is indeed 25, following these steps:

- Substitute t = 7.9 into S(t): S(7.9) = 40/(1 + 150e^(−0.7(7.9)))
- Complete the multiplication in the exponent: S(7.9) = 40/(1 + 150e^(−5.53))
- Simplify the exponential (e^(−5.53) ≈ 0.00397): S(7.9) ≈ 40/(1 + 150(0.00397))
- Multiply 150(0.00397): S(7.9) ≈ 40/(1 + 0.595)
- Add 1 + 0.595: S(7.9) ≈ 40/1.595
- Divide 40/1.595: S(7.9) ≈ 25.08

When you check a solution, it is normal to get a value close to, rather than exactly equal to, the estimated value, because some error is expected when people estimate values from a graph.

Lesson Summary

In this lesson, you learned how to estimate input values from a graph when given a logistic function's output value, and then interpret those values in different scenarios. Here is a list of the key concepts in this lesson:

- To solve an equation like 2 = A(x), find where the output values, or A-values here, are equal to 2 and find all corresponding input values, or x-values here.
- Once you have an estimated solution, you can check it by plugging it back into the equation and making sure you get an output value close to what you expected.
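As a quick numerical cross-check of the graphical estimates in this lesson, you could evaluate both market-share functions directly. Here is a minimal Python sketch:

```python
import math

# Check the graphical estimates for S(t) = 25 and R(t) = 25.
def S(t):
    return 40 / (1 + 150 * math.exp(-0.7 * t))

def R(t):
    return 50 / (1 + 150 * math.exp(-0.9 * t))

print(f"S(7.9) = {S(7.9):.2f}")   # ≈ 25.08, close to the estimate of 25
print(f"R(5.6) = {R(5.6):.2f}")   # also lands near 25
```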

Given two scatterplots of real-world data (one with outliers, one without outliers), the two associated logistic regression functions and the associated coefficients of determination, identify the more appropriate regression function for the data.

If you love movies, you might be interested in subscribing to a service that sells discounted tickets to first-run films. It turns out that a lot of people were interested in such a service. However, like all businesses, this one had its ups and downs, as you will soon see. In this lesson you will learn about the effects of outliers. Outliers can occur for a variety of reasons, ranging from spikes in data to incorrectly recorded values. You will need to know how to handle outliers in different situations when modeling data sets, including evaluating a coefficient of determination and taking action when outliers distort it.

Reorganize Your Equation

If your equation is not in the form of f(x)=mx+b, you can rearrange it using the commutative property. For example, if P(t) was given as P(t)=−60+3t, it is a good habit to change it to P(t)=3t−60 so it is easy to identify the slope and y-intercept.

Maximum and Minimum

Which mountain represents a maximum value? The following table includes data from the World Bank for the GDP (gross domestic product) for 2021. Of these top 10 countries, which achieved the maximum?

Ranking | Economy | GDP (millions of U.S. dollars)
1 | United States | 22,996,100
2 | China | 17,734,063
3 | Japan | 4,937,422
4 | Germany | 4,223,116
5 | United Kingdom | 3,186,860
6 | India | 3,173,398
7 | France | 2,937,437
8 | Italy | 2,099,880
9 | Canada | 1,990,762
10 | Korea, Rep. | 1,798,534

Imagine you are on a roller coaster. As you leave the boarding platform, you travel up to the highest point of the ride. This is the maximum height of the ride. Then the coaster plummets toward the ground and reaches the lowest point. This is the minimum height of the ride. Here is another example: picture a group of children lined up in order from tallest to shortest. The tallest child is the maximum and the shortest one is the minimum. Likewise, Mount Everest, in Nepal and Tibet, is the tallest mountain in the world at 29,029 feet above sea level, making it the maximum. And in the GDP table above, the answer is the United States: finding a maximum is as simple as identifying the largest value.

As it turned out, point J was not an outlier on the following scatterplot.

In 1998, Apex Online tripled its advertising investment. More advertising would bring more revenue, making point J's y-value higher than normal. Advertising is expected to positively impact revenue, so this is an internal influence on the data that should be expected.

Some Outliers

In March, the online game Instinct Fighters became very popular again, experiencing exponential growth in the number of gamers. The next scatterplot displays data on the number of daily online gamers since March 1. An exponential regression line is used to model the data. Maria verifies that an exponential model is appropriate here, but notes there are two possible outliers, points D and J. Because of those two points, the coefficient of determination is 0.92, which is not as high as that of the linear model or the polynomial model. However, the other data points are still very close to the exponential regression function, and the overall coefficient of determination is still very high. The online game Star Myth also started in January. The next scatterplot displays data on the number of daily online gamers for Star Myth since January 1. Expectations for Star Myth's success were mediocre, so the assigned web server could only host 5,000 gamers. However, the game turned out to be more successful than expected, and the number of gamers quickly approached the server's maximum limit. A logistic regression line is used to model the data. For this regression function, points E, F, and L are possible outliers. Because of these points, the coefficient of determination is currently 0.79. If these points were identified as true outliers and removed, the coefficient of determination would improve. Even if these points were not true outliers and remained in the data set, the r²-value would still indicate a strong model fit. Note that you do not need to get nervous when you see possible outliers; sometimes, the rest of the data is strong enough to compensate for points lying away from the general trend.

Using Functions

In a more formal sense, a function is a mathematical relationship between two variables where every input value is matched with exactly one output value. Think of a function as a "machine" that takes an input and produces an output based on the input. For instance, think of an automated teller machine (ATM). The ATM takes a few inputs (your card, your PIN, and the dollar amount you request) and produces money as the output. A function is simply a "rule" that takes one or more inputs and provides a specific output. Function notation is given as y = f(x). But f(x) does not mean to multiply f times x. It is read as "the value of f at x" or just "f of x." Another way of saying this is "for a given input of x, what is the function's output, f(x)?" Although f is often used to represent a function and x is commonly used for the variable, you really can use any letters you want. You could use g(x) or W(a). You usually choose letters that represent what the function does or what it calculates. Consider this example again: You go to the gas station to fill up your tank. If gas is $2.30 per gallon, then a cost function depends on the number of gallons needed to fill up your tank. The variables C and g can be used for cost and gallons, respectively. Then the function might be C(g) = 2.30g. This means that when you input a number of gallons, the function, C, gives you the cost of that many gallons. On the other hand, what if you only have $20 for gas? You are not interested in knowing the cost; you already know that the maximum cost is $20. In this situation, you are more interested in knowing how many gallons of gas you can get for $20. So here, the independent and dependent variables switch. Now you would be looking at a function that takes the cost, C, and outputs the number of gallons of gas, g, you can buy. In this situation, the corresponding function would be g(C) = C/2.30. You can also use functions to look at how investments grow over time. The simple interest, I, you earn on an investment of $1,000 for one year is a function of the annual interest rate, r: I(r) = 1000r. A final example showing the real-world applicability of functions is the conversion of temperatures. To convert from Celsius to Fahrenheit, you use the simple rule, or function, to multiply by 9/5 and then add 32. Input a value in degrees Celsius and the function provides output in terms of degrees Fahrenheit: F(C) = (9/5)C + 32. Keep in mind that sometimes the exact function that relates two variables is known. Sometimes all that is known is that there is some relationship or association between two variables. Even spotting that is a great place to start. In fact, in later lessons, you will learn how to turn raw data into a function so that you can predict values between variables. In today's data-intense world, you can imagine how being able to predict future values based on data from today would be extremely helpful.
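Since a function is just an input-output rule, it translates naturally into code. Here is a small Python sketch of the gas-station and temperature functions from this section:

```python
# The gas-station and temperature functions from this section,
# written as small Python functions (input in, output out).

def cost(gallons):
    """C(g) = 2.30g: cost in dollars for a number of gallons at $2.30/gal."""
    return 2.30 * gallons

def gallons(cost_dollars):
    """g(C) = C / 2.30: gallons you can buy for a given number of dollars."""
    return cost_dollars / 2.30

def fahrenheit(celsius):
    """F(C) = (9/5)C + 32: convert degrees Celsius to degrees Fahrenheit."""
    return (9 / 5) * celsius + 32

print(cost(10))        # 23.0 dollars for 10 gallons
print(gallons(20))     # ≈ 8.7 gallons for $20
print(fahrenheit(100)) # 212.0 °F, the boiling point of water
```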

Determining X-Values on Graphs

In comparing the two server options mentioned in the introduction above, your manager has asked that you compare how the two servers manage resources as there are more and more visitors to the servers, measured in the number of "hits." You get some data from Whaamm and Zippedy to model this and figure out which server does better with more and more visitors. Let W(x) and Z(x) model those two servers' available resources, in percentages, where x is the number of hit requests in thousands. The higher the percentage of available resources, the faster the server will perform. As you can see in the following graph, Whaamm, represented by W(x), is very fast when the number of users is low, but its speed decreases more rapidly. Zippedy, represented by Z(x), starts out slower, but it loses speed at a slower rate, so more users can be using the websites on this server before its speed degrades to the point where it causes problems. The better server depends on your needs. If your company has only a few users, you will do better with Whaamm. Its higher speeds will work better for those relatively few users. If you have a lot of users, say, more than 27,000 hits at the same time, you will probably want Zippedy, as additional users will slow down its overall speed less than Whaamm's server will. The slopes and y-intercepts of the two lines help to determine which is the better server in a given situation. The slope of the red solid line is −2 and its y-intercept is 80, while the slope of the blue dashed line is −1.25 and its y-intercept is 60. This implies that the red solid line starts higher than the blue dashed line, but its slope is steeper, decreasing more rapidly. While the red solid line has a higher starting value, its steeper slope means that it will cross the blue dashed line at some point. When the red solid line is higher, the server it represents is faster. To the right of the intersection, the server represented by the blue dashed line becomes the better choice.

Lesson Summary

In this lesson, you learned how to find an optimal situation by analyzing a graph and the input-output pairs. Here is a list of the key concepts in this lesson:

- Comparing two or more graphs of linear functions can help you pick the best option out of several solutions to a problem.
- The best option depends on the slopes and y-intercepts of the linear functions and likely on some given value of x.
- Read the problems carefully to make sure you know what you are looking for: highest prices or lowest prices, for example.

Contractors are frequently hired to perform tasks that businesses need to have done, but which require specialized knowledge that the company does not have on its staff. Marisol needs somebody to figure out what improvements can be made in her office to improve employee morale and get more work done. She finds two highly recommended contractors who could do the job. Tom charges $75 an hour. Geri only charges $50 an hour, but requires an additional flat fee of $500 to start the work. Using the following graph, who is the better contractor for Marisol's needs? The answer depends entirely on how much time Marisol's job takes. The graph shows that if she wants somebody who will only be in the office for a couple of days, Tom is probably her best bet. Up to 20 hours, he costs less than Geri. On the other hand, if Marisol expects the investigation and remediation to take longer than 20 hours, she should probably call on Geri. While Geri charges more up front, her services will cost less in the long run.
Note that at 20 hours of work, Tom and Geri charge exactly the same amount: $1,500. Twenty hours at $75 an hour is $1,500. Twenty hours at $50 an hour is $1,000, which, plus the $500 flat fee, also equals $1,500. So if the job takes exactly 20 hours, it does not matter which of them Marisol hires, and she can probably justify hiring either one of them if she does not expect the job to take much less or much more than 20 hours.
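The same break-even reasoning is easy to confirm with a few lines of Python; this sketch encodes both cost rules and solves 75h = 50h + 500 directly:

```python
# Compare the two contractors' cost functions and find the break-even
# point, where Tom's and Geri's total charges are equal.

def tom(hours):
    return 75 * hours            # $75 per hour, no flat fee

def geri(hours):
    return 50 * hours + 500      # $50 per hour plus a $500 flat fee

# Break-even: 75h = 50h + 500  ->  25h = 500  ->  h = 20
breakeven = 500 / (75 - 50)
print(breakeven, tom(breakeven), geri(breakeven))  # 20.0 1500.0 1500.0
```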

Changes Over Time: Average Rates of Change

In linear functions, there are no "turns"; the function increases or decreases at the same rate forever. This is one of the defining characteristics of lines. However, polynomials do not behave this way. An nth-degree polynomial can have as many as n − 1 turns. Being able to talk about how quickly a function increases or decreases is very helpful. For example, if the linear function E(t) = 3t + 7 measures the number of employees (E) that are being hired each month (t) starting in January of next year, the slope of this line (m = 3) indicates that 3 additional people will be hired each month. While there is not one simple number to describe things like this for nonlinear functions, the idea of a rate of change can be used to achieve something similar for nonlinear functions. The average rate of change is the result when you take the change in the dependent variable divided by the change in the independent variable. This gives you a ratio of how one variable changes with respect to another variable; here, that would be how the dependent variable changes with respect to the independent variable. In the pain medicine example, the faster the medicine gets into your system, the quicker you experience some pain relief. This idea of "faster" implies rate of change. An average rate of change measures how much of the medicine is in your body over time. If it is too high, you might overdose or experience other complications. If it is too low, you will not experience much pain relief. Mathematically speaking, the average rate of change is the rate at which one quantity changes with respect to another quantity over a specified interval. The rate of change, or slope, can be either constant or variable. A constant rate of change means it is always the same; this is a linear function. Nonlinear functions have variable rates of change. The slope formula can be used to calculate any average rate of change. This means that for two points on a curve, (x1, y1) and (x2, y2), the slope of the line through the points will be the average rate of change from x1 to x2. The method for calculating average rate of change, or slope, remains the same for both linear and nonlinear functions. Remember, slope, m, is found by dividing the change in y by the change in x, using this formula: m = (y2 − y1)/(x2 − x1). Consider this example about temperatures. At 9:00 a.m., the temperature was 63 degrees Fahrenheit and at 11:00 a.m. the temperature was 71 degrees Fahrenheit. These two data points are the coordinates (9, 63) and (11, 71). m = (change in y)/(change in x) = (71 − 63)/(11 − 9) = 8/2 = 4. What if you had used the coordinates (0, 63) and (2, 71) for the two data points? This would make 9:00 a.m. "time zero," or t = 0, and 11:00 a.m. t = 2 (2 hours later). This allows you to think of the units while you work with the coordinates. The slope through the two points (0, 63) and (2, 71) would then be: m = (change in y)/(change in x) = (71°F − 63°F)/(2 hours − 0 hours) = 8°F / 2 hours = 4°F per hour. Either method results in the same answer, but looking at it more mathematically with the units in the slope formula can make it a little clearer that the average rate of change in temperature is 4 degrees per hour. For the function y = 5x³ − 9, is the rate of change constant or variable? The rate of change is variable, because y = 5x³ − 9 is a nonlinear function.
With a nonlinear function, the slope varies over different intervals, so the rate of change is variable. Suppose the function y = x³ + x measures the number of people, y, that come into a bakery when it opens at time x = 0, where x is measured in hours. What is the average rate of change for the first two hours the bakery is open? The first two hours the bakery is open are time values x = 0 and x = 2. Between these two time values, the average rate of change would be m = ((2³ + 2) − (0³ + 0))/(2 − 0) = (10 − 0)/2 = 5. The value, V, of your college savings account with a deposit amount of $500 and 8% simple annual interest can be calculated using the formula V = 500 + 500(0.08)t, where t is years. After 1 year, your account has $540 and after 5 years, your account has $700. What is the average rate of change in your savings account (round to the nearest dollar)? The average rate of change is m = (700 − 540)/(5 − 1) = 160/4 = 40, so your savings account increases on average by $40 a year.
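Because the slope formula works for any function, you can wrap it in a small helper and reuse it. Here is a Python sketch applied to the bakery and savings examples above:

```python
# A small helper for the average rate of change of any function
# over an interval [x1, x2], using the slope formula from this lesson.

def avg_rate_of_change(f, x1, x2):
    return (f(x2) - f(x1)) / (x2 - x1)

bakery  = lambda x: x**3 + x              # y = x^3 + x
savings = lambda t: 500 + 500 * 0.08 * t  # V = 500 + 500(0.08)t

print(avg_rate_of_change(bakery, 0, 2))   # 5.0 people per hour
print(avg_rate_of_change(savings, 1, 5))  # 40.0 dollars per year
```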

Given a scatterplot of real-world data, a logistic regression function for the data, and the associated coefficient of determination, interpret the regression function and the associated coefficient of determination in context.

In real life, when you use math to solve problems, you rarely have a function waiting to be analyzed. Instead, you usually see some collected data, and you need to plot the data and then model it with a certain type of function (usually linear, polynomial, exponential, or logistic). In this lesson, you will model some data with logistic functions, which means finding a function of best fit to predict future data points and judging the strength of the selected function from its coefficient of determination.

Given two polynomial graphs of data for two real-world situations, identify the optimal situation based on the real-world situation and the input and output pairs.

In real life, there are usually several possible solutions to any problem. The question then becomes "Which solution is optimal?" For example, Macro Games, a start-up online game development company with limited funding, needs to decide which one of its mobile games to promote. The good news is that the company can use graphs of data models to objectively compare two (or more) solutions and then make its decision. In this lesson, you will see some examples of models for potential solutions to problems, including those for Macro Games, and then you will use the models based on graphs to identify the better solution.

Given two exponential graphs of data for two real-world situations, identify the optimal situation based on the real-world situation and the input and output pairs.

In real life, there is usually more than one solution to any problem. The question often then becomes "Which solution is best?" When you have data and can fit solutions to models, the models become ways that you can objectively compare two (or more) solutions. In this lesson, you will see some more examples of models for potential solutions to problems and then use the models to identify the objectively better solution. This will help you identify an optimal solution or situation based on graphs.

CPU Clock Race

Several businesses produce central processing units (CPUs). And much like any product on the market, not all products are created equal. Below, you will see how two companies' CPUs compare to one another. Consider this example: A research team at a university split into two CPU companies, Kavox and Nadir, in 2000. Both companies started selling CPUs with the same clock rate, 500 megahertz (MHz), but their businesses went on different paths. The CPU clock speeds at Kavox and Nadir can be modeled by two exponential functions: K(t) = 500 × 1.12^t and N(t) = 500 × 1.08^t, where t stands for the number of years since 2000. This applet shows the graphs of both functions. From the graphs, you can see that Kavox improved more and more over Nadir. In 2010, how much faster was Kavox's CPU? To answer this question, you could use either of two methods. For a more exact answer, you could substitute t = 10 into both functions, and then find the difference in their y-values: K(10) − N(10). Alternatively, you could have the applet calculate these values for you by dragging the points so that both points' x-values are 10. You should have point A (10, 1079.58) on Nadir's graph, and point B (10, 1552.92) on Kavox's graph (or two other nearby points). These answers imply that Kavox's CPU was 1552.92 − 1079.58 = 473.34 MHz faster than Nadir's CPU in 2010. To compare the two companies over time, find how quickly both companies achieved a 1000-MHz processor. To answer this question, you need to solve two equations. Substitute y = 1000 into both functions, and you have: 1000 = 500 × 1.08^t and 1000 = 500 × 1.12^t. Remember, you do not focus on solving equations by hand in this course. Instead, estimate the solution from the graph. Practice doing this just using the graph first. After you have a good estimate for the solutions that way, you can check your work by dragging both points in the applet until their y-values are 1,000. You should have point A (9.01, 1000) on Nadir's graph, and point B (6.12, 1000) on Kavox's graph (or two nearby points). This implies that Kavox produced 1000-MHz CPUs 9.01 − 6.12 = 2.89 years ahead of Nadir. This is a huge gap between the companies. A major advantage of this sort of analysis is that it is purely objective, allowing you to compare situations from a strictly numerical perspective. In 2012, how much faster was Kavox's CPU than its competitor's? Drag those two points until both points' x-values are 12. You should have point A (12, 1259.09) on Nadir's graph, and point B (12, 1947.99) on Kavox's graph (or two nearby points). This implies that Kavox's CPU was 1947.99 − 1259.09 = 688.9 MHz faster than Nadir's CPU in 2012. How many years before Nadir did Kavox reach 1,200 MHz in CPU clock rate? To see this, set the points in the applet so the y-value is 1,200 for both Nadir and Kavox. You should see something close to point A (11.38, 1200) on Nadir's graph and point B (7.73, 1200) on Kavox's graph.
The x-values then show when each company achieved the 1,200-MHz CPU clock rate, which means Kavox achieved 1,200 MHz about 3.65 years earlier than Nadir, since 11.38 − 7.73 = 3.65.

Business Profit Race

You know that Kavox had faster processors, so you may think that Kavox sold more processors. In fact, although Kavox manufactured faster CPUs, its revenues lagged behind Nadir's for a while, as Nadir had a better marketing team. The two companies' revenues can be modeled by the following functions: K(t) = 3.2 × 1.15^t and N(t) = 7.4 × 1.07^t, where t stands for the number of years since 2000. This applet shows the graphs of both functions. According to the functions' formulas, you can tell Kavox's revenue was $3.2 million in 2000, while Nadir's revenue was $7.4 million that year. By their graphs, both companies' revenues increased exponentially, but Kavox's revenue increased faster. When did Kavox catch up with Nadir in terms of revenue? Move point A to where those two functions intersect. You should see A (11.63, 16.25) or a point close by. Since the x-values correspond to years since 2000, Kavox caught up to Nadir's revenue a little over halfway through 2011. In 2009, which company had a higher revenue? Why? In 2009, Nadir had a higher revenue because, when t = 9, Nadir's function had a larger y-value than Kavox's function. In 2014, what was the difference in those two companies' revenues? Drag those two points until both points' x-values are 14. You should have point A (14, 19.08) on Nadir's graph, and point B (14, 22.64) on Kavox's graph (or two nearby points). This implies that Kavox's revenue was 22.64 − 19.08 = 3.56 million dollars more than Nadir's.

Lesson Summary

In this lesson, you compared x-values and y-values on two functions' graphs. You saw that Kavox produced 1,000-MHz CPUs 2.89 years earlier than Nadir, and that Kavox's revenue surpassed Nadir's in late 2011. Here is the key concept you learned in this lesson:

- You can identify the optimal situation out of two similar cases by comparing the input-output pairs shown by the exponential graphs of data.
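Although this course estimates such solutions from graphs, the milestone questions can also be answered exactly with logarithms. Here is an optional Python sketch of that cross-check for the 1,000-MHz milestone:

```python
import math

# Solve 1000 = 500 * b^t exactly with logarithms instead of reading
# the applet, for each company's growth factor b.
def years_to_reach(target, start, b):
    return math.log(target / start) / math.log(b)

kavox = years_to_reach(1000, 500, 1.12)   # ≈ 6.12 years
nadir = years_to_reach(1000, 500, 1.08)   # ≈ 9.01 years
print(f"Kavox: {kavox:.2f} years, Nadir: {nadir:.2f} years")
print(f"Kavox was ahead by {nadir - kavox:.2f} years")  # ≈ 2.89
```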

Given a logistic function and an input, calculate the corresponding output.

In science fiction, authors sometimes invent worlds that operate by rules different from those experienced here and now, on Earth. Sometimes, there are no limits in those fictional worlds—innumerable parallel universes; instantaneous travel; unrestricted population growth to the point where people cover every inch of a planet or live underground in burrows. But that's not the "real world," is it? In our world, today, there are nearly always limits, or constraints. In this unit, you study logistic functions, which have more real-world applications than exponential functions do. In this lesson, you will learn that a logistic function generally has a starting point and increases until nearing its natural limit. You will also calculate outputs of logistic functions given an input, following the normal order of operations.

Given two logistic graphs of data for two real-world situations, identify the optimal situation based on the real-world situation and the input and output pairs.

In some professions, working with logistic functions is part of the job. Often, understanding these functions drives important decisions, like which computers to buy or what adjustments are needed in a marketing strategy. In this lesson, you will learn how to make choices just like those, between two or more logistic functions based on different scenarios. You will also learn how vital it is to understand the exact context of the problem you need to solve.

Moore's Law Continued

In the IT industry, Moore's Law states that about every two years, the number of transistors that can fit on a circuit doubles. The word "doubles" tells you this change has a constant ratio, so you can model Moore's Law with an exponential function. Consider this next example: In 1979, Intel released its 8088 processor, which had 29,000 transistors per integrated circuit (IC). Using Moore's Law, you can model the number of transistors per IC with the function T(x) = 29,000 × 2^(x/2), where x is the number of years since 1979. The exponent is x/2 because Moore said the number of transistors doubles every two years. Use this function to figure out what Moore thought the number of transistors per IC would be in 1985. There is a six-year difference between 1979 and 1985, so you need to substitute x = 6 into T(x), and you have: T(6) = 29,000 × 2^(6/2) = 29,000 × 2³ = 29,000 × 8 = 232,000. From 29,000 × 2³, you need to calculate the exponent before multiplying. The next step gives you 29,000 × 8. The result says that processors manufactured in 1985 would have approximately 232,000 transistors per IC. In that year, Intel released its 80386 processor, which had 275,000 transistors per IC. Next, see what Moore's Law predicted for the number of transistors per IC in 2000. The difference between 1979 and 2000 is 21 years, so you will substitute x = 21 into T(x), and you have: T(21) = 29,000 × 2^(21/2) = 29,000 × 2^10.5 = 29,000 × 1,448.1546... ≈ 41,996,486. The result says processors manufactured in 2000 would have approximately 42,000,000 transistors per IC. In that year, Intel released its Pentium 4 Willamette processor, which had 42,000,000 transistors per IC. Since 2000, a company's revenues double every three years. The following function models the company's revenues, in millions of dollars: f(x) = 3 × 2^(x/3), where x is the number of years since 2000. Calculate the function's value when x = 6, and interpret the result in this context. The function's value is 12 when x = 6, since f(6) = 3 × 2^(6/3) = 3 × 2² = 12. It implies that the company's revenues were $12 million in 2006. Using Moore's Law, you can model the number of transistors per IC with the function T(x) = 29,000 × 2^(x/2), where x is the number of years since 1979. Find the function's value when x = 4 and interpret it in this context. Substitute x = 4 into the formula, and you have: T(4) = 29,000 × 2^(4/2) = 29,000 × 2² = 29,000 × 4 = 116,000. This implies that approximately 116,000 transistors could fit on an IC in 1983.

Lesson Summary

In this lesson, you learned that an exponential function has a common ratio. The function to model the amount of money in Faye's account, in dollars, was F(x) = 10,000 × 1.01^x, where x is the number of months passed. The number 10,000 is the initial value, while the number 1.01 is the exponential function's common ratio. Here is a list of the key concepts from this lesson:

- Linear functions have a constant rate of change, also called the slope of the line.
- Exponential functions have a constant ratio, meaning that quantities grow or decrease by multiplication.
- Exponential growth occurs in many different disciplines, including business, finance, IT, biology, chemistry, and physics.
- When calculating inputs and outputs, remember to simplify exponents first, then perform multiplication and division. If there is any addition or subtraction, do those last.
- When interpreting inputs and outputs of exponential functions, use the units of the independent and dependent variables, just as you do with linear and polynomial functions.
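One last note on these doubling problems: they all share the same shape, T(x) = a × 2^(x/d), where a is the starting amount and d is the doubling time. Here is an optional Python sketch that evaluates the examples above with one general-purpose function:

```python
# A doubling model T(x) = a * 2^(x/d), where a is the starting
# amount and d is the doubling time in years.

def doubling_model(a, d, x):
    return a * 2 ** (x / d)

# Moore's Law prediction for transistors per IC, starting at 29,000 in 1979
print(doubling_model(29_000, 2, 6))    # 232000.0, the 1985 prediction
print(doubling_model(29_000, 2, 21))   # ≈ 42 million, the 2000 prediction

# Revenues doubling every 3 years, starting at $3 million in 2000
print(doubling_model(3, 3, 6))         # 12.0, i.e., $12 million in 2006
```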

An Increasing Increase

In the following example, you will be comparing instantaneous rates of change at different values to identify the best instantaneous rate of change. Keep in mind that "best" is always based on the context of the situation, so make sure you understand the context completely here. Consider this example: Better Hires has tracked data from the number of client leads they have had over time. The function that modeled the number of client leads after x weeks was h(x) = 18 × 1.03^x. There are a couple of key facts that came from this function. First, the function's values increase as x gets bigger, as illustrated in the following graph. Second, as x increases, so does the instantaneous rate of change. This is because of the 1.03 in the function. Each time x increases by 1, the previous amount is multiplied by 1.03. This means that 1.03 is multiplied by a larger and larger number at each step. This proportional growth guarantees that the function grows more and more. Look at the table below, which shows the difference in the value of the function as x increases.

x | h(x) | Difference from previous h(x)
0 | 18.00 | 0.00
1 | 18.54 | 0.54
2 | 19.10 | 0.56
3 | 19.67 | 0.57
4 | 20.26 | 0.59
5 | 20.87 | 0.61

This trend will continue. Each time the function multiplies by 1.03, the result can be seen in the ratios 18.54/18 = 19.10/18.54 = 1.03. Look back at the table and notice that for each row, the "Difference from previous h(x)" grows a little. For example, in row 3, the difference is 0.57, while the previous row, row 2, displays a difference of 0.56. The differences, which are actually the average rates of change from one x to the next, are growing slowly. As x gets bigger and bigger, the differences grow faster and faster. Since the average rate of change is growing, so is the instantaneous rate of change. The base of the exponent, 1.03, is more important to the overall instantaneous rate of change of the function than the starting point of 18. The 1.03 means that the function's instantaneous rate of change is increasing. In general, if the base of the exponential is greater than 1, then growth is occurring. This is because each increment of 1 on the x-axis means an increase from the previous amount. Consider this situation: If the base of the exponent were exactly 1, what would multiplying by the base do? There would be no change at all, because the function would simply produce the same result for each change in x. As an analogy, think of multiplying your age by 1, which simply produces your exact age again. No matter how many times you repeat that process, the result does not change. On the other hand, if the base is between 0 and 1, such as 0.37 or 0.68, then the function is decreasing. This is because each increment on the x-axis means a decrease from the previous amount. For example, 18 × 0.68 = 12.24, and 12.24 × 0.68 = 8.3232. As a function using 0.68 as its base continued, the result of each multiplication step would continue to decrease. The bottom line is that the bigger the growth factor, the faster and bigger the increase over time. A function with a growth factor of 1.07 will grow faster as x gets bigger than a function with a growth factor of 1.05. Short-term growth is a little harder to describe without using a table: a function with a larger starting amount has larger values at first, even if it grows more slowly, as you can see in this example, where the function g(x) has a larger growth rate than f(x).
Even if the starting amount for f(x) is greater than that of g(x), as x gets bigger, the instantaneous rate of change for g(x) surpasses the instantaneous rate of change for f(x). This is depicted in the following graph. The growth rate is the most important determinant of how the function acts as x gets bigger. Rob is investing in some stocks for retirement. He discovers one company, Erata Industries, whose stock prices are growing according to the function f(x) = 12 × 1.2^x. Another company, Torbin Production, has stock prices that are growing according to the function g(x) = 35 × 1.13^x. Rob is not retiring for many years. Which company should he invest in? Explain your answer. Rob should invest in Erata Industries. The value of the stock is modeled by f(x), which will have a greater instantaneous rate of change as x gets bigger. This is because the growth factor is greater: 1.2 > 1.13. This means that the stock modeled by f(x) is going to grow more, despite its initially lower value. The same reasoning applies to a similar problem comparing two ponds: the pond modeled by the function j(x) will have a greater instantaneous rate of change as x gets bigger, because its growth factor is greater: 1.31 > 1.24. This means that as the number of months grows, it will be the faster-growing pond.
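If you want to see roughly where Erata's stock overtakes Torbin's, you can solve 12 × 1.2^x = 35 × 1.13^x with logarithms. This optional Python sketch does that and spot-checks a few x-values:

```python
import math

# When does Erata (12 * 1.2^x) overtake Torbin (35 * 1.13^x)?
# Setting 12 * 1.2^x = 35 * 1.13^x and solving with logarithms:
x_cross = math.log(35 / 12) / math.log(1.2 / 1.13)
print(f"crossover at x ≈ {x_cross:.1f}")  # ≈ 17.8 time periods

erata  = lambda x: 12 * 1.2 ** x
torbin = lambda x: 35 * 1.13 ** x
for x in (10, 18, 25):
    print(x, round(erata(x), 1), round(torbin(x), 1))
# Torbin is worth more early on, but Erata pulls ahead for larger x.
```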

When Outliers Impact Regression Models

In the following example, you will revisit the linear regression that Jessica prepared for her real estate client to see how she handled some possible outliers. In general, when a data point lies away from the general data trend, you need to examine the data point and decide if it is a valid data point, thus keeping it in the data set, or if it is a true outlier, thus removing it from the data set. Consider Jessica's scenario again: Jessica's client was wondering what a reasonable rent price is for a 2,000-square-foot house. The following graph is Jessica's original scatterplot with her original linear regression. Jessica's original linear regression equation was f(x) = 1.66x − 427.12, but notice that point L is a possible outlier that is separate from the rest of the data. When an outlier appears in a data set, first find out why the outlier happened. If the outlier can be explained as a mistake or a special condition, then the outlier can be removed from the data set. If the outlier happened naturally and under no special circumstances, then leave it in the data set. Jessica found that the particular rental place associated with point L was actually a basement, not a separate rental unit, which is why its rent was so low. In this situation, it is reasonable to remove point L since a basement is not really comparable to an apartment. Jessica removed point L and ran a new regression modeled in the following graph: Once Jessica removed point L from the data, the regression line changed from f(x) = 1.66x − 427.12 (with point L) to g(x) = 1.56x − 280.75 (without point L). Here is a key point: The outlier, point L in this situation, had two big impacts on the regression line. First, point L caused the original regression function to be lower (with a smaller y-intercept). This makes sense because point L was located lower on the graph than the rest of the points, so it "pulled" the left part of the line toward it. Second, point L also made the original regression function steeper. Because point L pulled the left part of the line lower, the right side of the line went higher, like a seesaw. Together, both effects meant that if Jessica had used f(x) with point L included to predict the rent for a 2,000-square-foot house, the rent would have been overestimated. Her client would have set too high a rent level and might not have found any willing renters at that price. That is why identifying possible outliers and removing true outliers is so important. After Jessica removed point L, she noticed that point K was also a possible outlier. Jessica found that the house associated with point K was located in an historic district, which justified a higher than normal rent, but that also meant that point K was unlike her other data points, meaning that it should be removed from the data set. Once point K was removed, the new regression line appeared as in the following graph: Once Jessica removed point K from the data, the regression line changed from g(x) = 1.56x − 280.75 (with point K) to h(x) = 1.36x − 140.54 (without point K). Like point L, point K was doing two things to the proposed function line. First, point K caused the right part of the line to be higher, thus causing the left part of the line to be lower. This is why there is a smaller y-intercept with point K included in the data set. Point K also made the line steeper because it pulled the right part of the line higher. If Jessica had used g(x) with point K included to predict the rent for a 2,000-square-foot house, she would again have overestimated the rent.
Now that Jessica has removed all true outliers, she can be more confident that she has a good regression function.
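To see the "seesaw" effect for yourself, you can fit a line with and without a low outlier. This optional Python sketch uses NumPy's polyfit on hypothetical rent data (not Jessica's actual numbers):

```python
import numpy as np

# How a single outlier can tilt a linear regression, using
# hypothetical rent data (square feet, monthly rent in dollars).
sqft = np.array([800, 1000, 1200, 1400, 1600, 1800])
rent = np.array([1000, 1300, 1550, 1900, 2200, 2450])

# Append an outlier: a 900 sq ft "unit" renting for only $300
# (say, a basement that is not really a comparable rental).
sqft_out = np.append(sqft, 900)
rent_out = np.append(rent, 300)

slope1, intercept1 = np.polyfit(sqft_out, rent_out, 1)
slope2, intercept2 = np.polyfit(sqft, rent, 1)
print(f"with outlier:    rent ≈ {slope1:.2f}x + {intercept1:.2f}")
print(f"without outlier: rent ≈ {slope2:.2f}x + {intercept2:.2f}")
# Removing the low point lowers the slope and raises the intercept,
# the same pattern Jessica saw when she removed point L.
```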

Choosing a Game to Invest In

In the last lesson, you saw how to solve equations using polynomial functions and how to interpret those solutions in context. In this lesson, you will solve equations to find optimal solutions in real-world situations. Before jumping into that, consider this. If you have a smartphone, you probably have seen people playing some mobile games; maybe you even play some mobile games. If you play games on your phone, you know there are a ton of digital games out there. How can you compare two games to decide which one to promote? That is exactly the problem Macro Games was faced with. In January 2010, Macro Games launched two new online games, Instinct Fighters and Zoo Managers. Using data on the number of daily gamers, Macro Games created two functions that modeled the number of daily gamers. The function i models the number of daily gamers for Instinct Fighters while z models the same for Zoo Managers: i(t) = 16.61t³ − 329.42t² + 1672.7t + 1120 and z(t) = −3.09t⁴ + 77.2t³ − 645.9t² + 2101.4t + 981, where t is the number of months since January 2010. Here are those two functions' graphs: In January 2011 (when t = 12), Macro Games had to decide which game it was going to focus on developing further due to budget constraints. According to the graphs above, the two games attracted similar numbers of new gamers in January 2010 and February 2010. From February to May 2010, Instinct Fighters attracted more gamers every day. For example, by i(3) ≈ 3610 and z(3) ≈ 3300, which represents March 1, you can tell that Instinct Fighters had 3610 − 3300 = 310 more gamers than Zoo Managers. Starting in late May, Instinct Fighters had fewer gamers than Zoo Managers. For example, by z(10) ≈ 3700 and i(10) ≈ 1510, representing October 1, you can tell Zoo Managers had 3700 − 1510 = 2190 more gamers. However, by the end of December, those two games attracted similar numbers of gamers again. From October to December, the number of gamers for Instinct Fighters recovered, while the number of gamers for Zoo Managers dropped. With these trends in mind, Macro Games had to decide which game deserved more investment to support an upgrade and new marketing. If the trend continues, Instinct Fighters would attract more gamers, while Zoo Managers would lose more gamers. All of this is why Macro Games chose to continue development for Instinct Fighters while stopping development on Zoo Managers. One thing to notice here is that Macro Games made this decision based purely on the input-output data for a function modeling the number of daily gamers over time for each game. This assumes that the input-output data for this function is a good indicator of what Macro Games wants from their games (that is, more daily gamers) and that these trends would continue in the future for each game if nothing else changed. The idea is that if this data indicates that Instinct Fighters is already the better game, then focusing resources there would lead it to be an even better game, thus attracting even more daily gamers. Basically, the input-output data from two functions allowed Macro Games to make an objective, informed decision on what to do.

Lesson Summary

In this lesson, you learned how to compare two functions' graphs, like those for Macro Games and for Sunrise Sky and Retreat Spas, to determine which function is optimal. Here is the key concept in this lesson:

- Given two polynomial graphs for two real-world situations, you can compare input-output pairs at a given point to see which situation might be ideal.
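Because the two models are ordinary polynomials, you can reproduce Macro Games' comparison by evaluating them at a few key months. Here is an optional Python sketch:

```python
# Evaluate the two daily-gamer models at a few months t to
# compare the games numerically, as Macro Games did.

def i(t):  # Instinct Fighters
    return 16.61 * t**3 - 329.42 * t**2 + 1672.7 * t + 1120

def z(t):  # Zoo Managers
    return -3.09 * t**4 + 77.2 * t**3 - 645.9 * t**2 + 2101.4 * t + 981

for t in (3, 10, 12):
    print(t, round(i(t)), round(z(t)))
# Instinct Fighters leads in March (t = 3), trails in October (t = 10),
# and the two games are similar again by January 2011 (t = 12).
```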

Comparing and Contrasting

In the last lesson, you saw how you could calculate market share for Sunrise Sky Spa. In this section, you will compare Sunrise Sky Spa with its competitor, Retreat Spa. Comparing the two companies like this can show you how they compete with each other. Retreat Spa started business around the same time as Sunrise Sky Spa. The market share of Retreat Spa in the city, in percentage, can be modeled by this function: R(t) = 50/(1 + 150e^(−0.9t)), where t is the number of years since 2000. Look at their graphs: Recall that, in logistic functions of the form f(x) = L/(1 + C × e^(−kx)), L determines the function's maximum value. On the graph, you can see S(t) approaches 40, but R(t) approaches 50 as the x-value becomes larger and larger. This behavior can be explained by the different L-values in the two functions. As for the value of k, it determines how fast a function grows. The larger k's value is, the faster the function grows in the middle segment. Since R(t)'s k-value is 0.9 while S(t)'s k-value is 0.7, Retreat Spa grew at a faster rate in the middle segment, reaching its maximum value faster than Sunrise Sky Spa did. On the graph, notice that a logistic function always has two horizontal asymptotes. As a logistic function's x-value becomes infinitely small or infinitely large, the function's output value becomes infinitely close to a certain horizontal line, which is called the function's horizontal asymptote. For now, focus on interpreting some of the information for Sunrise Sky Spa for real-world meaning. S(t) has two asymptotes: y = 40 (the function's maximum value) as the x-value becomes infinitely large, and y = 0 as the x-value becomes infinitely small. In terms of real-world meaning, this means that Sunrise Sky Spa will maximize its market share at about 40% of the market somewhere around 2014. Keep in mind this prediction is only accurate if there are no big disturbances to the market; for example, if Retreat Spa were to close all of its stores, then the 40% prediction of market share for Sunrise Sky Spa would no longer be accurate. Now turn to Retreat Spa. R(t) also has two asymptotes: y = 50 (the function's maximum value) as the x-value becomes infinitely large, and y = 0 as the x-value becomes infinitely small. These mean that in the worst-case scenario, Retreat Spa would hold 0% of the market share, while it could hold 50% of the market share in a best-case scenario. In fact, it looks like Retreat Spa's market share leveled off near 50% around 2011. Again, keep in mind that these values assume that there were no large changes to the "status quo." If there were large changes in the market, then the model would have to be reconsidered. In general, for f(x) = L/(1 + C × e^(−kx)), the function's curve approaches the asymptote y = L as its x-value becomes infinitely large, and the curve approaches the asymptote y = 0 as the x-value becomes infinitely small.
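One way to quantify the effect of k is to compute each function's midpoint, the input where it reaches half its maximum; for L/(1 + C·e^(−kt)) that happens at t = ln(C)/k. This optional Python sketch compares the two spas on that basis:

```python
import math

# Compare how fast the two spas grow by computing each logistic
# function's midpoint, t = ln(C) / k, where it reaches half its maximum.
C = 150
for name, L, k in [("Sunrise Sky", 40, 0.7), ("Retreat", 50, 0.9)]:
    t_mid = math.log(C) / k
    print(f"{name}: reaches {L / 2}% market share at t ≈ {t_mid:.1f}")
# The larger k-value (0.9) puts Retreat Spa's fastest-growth point
# about a year and a half earlier than Sunrise Sky Spa's.
```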

Interpreting Rates of Change for CPU Processors

In the last section you saw an argument about why you should not have models that present unbounded growth for the human population. There are actually some limits on how the population can grow. Another area where there are some limits on growth is technology, but technology is not often thought of as being limited. You will now take a look at how one important part of technology does have its limitations: the central processing units of computers and smartphones. Central processing units (CPUs) have been getting faster and faster over the years. However, there is a limit to how fast they can operate because of the material used to make CPUs. Though CPU speed increased exponentially since the 1950s, the speed hit a bottleneck around 2005 because of these material limitations. This was not a surprise to engineers and computer scientists. You will see how the speed of CPUs was increasing steadily for a long time, and then how CPU speed was still increasing but at a slower and slower pace in more recent years. The speed of CPUs, in MHz, over the years can be modeled by the logistic function s(t) = 5000/(1 + 20000e^(−0.29t)), where t is the number of years since 1970. The following is the function's graph: [The graph has years since 1970 on the x-axis and CPU speed in MHz on the y-axis. The curve is nearly flat until about t = 20, rises to about (50, 5000), and then levels off.] The following applet depicts the instantaneous rate of change for various years. Use the applet to determine the instantaneous rate of change for 2003 and then interpret the associated coordinate and instantaneous rate of change. For 2003, the corresponding t-value would be t = 33, since 2003 is 33 years after 1970. The associated coordinate at this t-value would be (33, 2087), which indicates that the speed of CPUs was about 2,087 MHz at the beginning of 2003. The instantaneous rate of change at this point is 352.61 MHz per year, implying that CPU speeds were increasing at 352.61 MHz per year at the beginning of 2003. Now use the applet again to see if you can find the coordinate and instantaneous rate of change in 2018. For 2018, the corresponding t-value would be t = 48, since 2018 is 48 years after 1970. The associated coordinate at this t-value would be (48, 4911.52), meaning that the speed of CPUs was about 4,912 MHz in early 2018. The instantaneous rate of change at this point is 25.21 MHz per year, implying that CPU speeds were increasing at about 25.21 MHz per year in early 2018. From function s(t)'s graph, you can see that its instantaneous rate of change was small at first, became larger and larger, and then became smaller and smaller. The instantaneous rate of change in 2003 was much larger than the instantaneous rate of change in 2018, meaning that CPUs were improving in processing speed much more quickly in 2003 than in 2018. This is the "leveling off," or the limitation on technology, that was referenced earlier in this section. Current technology can only be made to go so fast. Instead of instantaneous rates of change, you might be wondering what the average rate of change can tell you. Recall that the average rate of change is a measurement of how two variables change with respect to each other over an interval.
Instead of instantaneous rates of change, you might be wondering what the average rate of change can tell you. Recall that the average rate of change is a measurement of how two variables change with respect to each other over an interval. In this case, an average rate of change would indicate how CPU speeds were changing over a period of years. Suppose you wanted to know how CPU speeds changed from 2003 to 2018. The average rate of change can tell you this. See if you can calculate and interpret the average rate of change between 2003 and 2018. To do this calculation, you need the coordinates associated with 2003 and 2018, which were (33, 2087) and (48, 4911.52). You then substitute these values into the slope formula, and you should get a value of about 188.3. You can see the visual representation of the average rate of change below, as well as the associated calculation using the slope formula. This value indicates that CPU speed went up about 188.3 MHz, on average, each year between 2003 and 2018. If the instantaneous rate of change at (22.5, 168.35) is 40.32, how do you interpret the instantaneous rate of change? The speed of new Intel CPUs was increasing at 40.32 MHz per year in mid-1992. In this function modeling the speed of new Intel CPUs, if the average rate of change from t = 20 to t = 25 is 42.304, how do you interpret the average rate of change? From 1990 to 1995, the speed of new Intel CPUs increased at an average of 42.304 MHz per year. Lesson Summary In this lesson, you learned what an instantaneous rate of change and an average rate of change mean in a logistic function. An example of an instantaneous rate of change could be that the speed of Intel CPUs was increasing at 357.05 MHz per year at the beginning of 2005. An example of an average rate of change could be that Pinnacle was losing, on average, 890 customers per year from 2004 to 2006. Here is a list of the key concepts in this lesson: An average rate of change represents how one variable changes with respect to another over an interval of values (typically an interval of values for the independent/input variable). An instantaneous rate of change represents how one variable changes with respect to another at a particular instant (typically an instant defined by a specific value of the independent variable). The units for a rate of change are the dependent variable's unit divided by the independent variable's unit. So if a function measures CPU speed over time measured in years, the average and instantaneous rates of change are measured in CPU speed per year.

Identifying Asymptotes from a Scenario

In the last section, you saw how to identify asymptotes from the graph of a situation. Now consider a written scenario and see what about it indicates that there are asymptotes involved. Consider this example: Ancient Times trades vintage watches. For several decades, one model of watch from the 1800s was valued very highly, at around $16,000. However, around 2002, new information was found about this model, and its value steadily dropped over the next few years. Even though the new information devalued this model of watch, the watches still held some of their value. From this scenario, you can tell there are two asymptotes in play. The first is the upper asymptote, where the watch was valued at $16,000 for several decades. Since the value of the watch held steady at this level for several decades, there is an asymptote here. The upper asymptote is y = 16,000, since it was the peak value of the watch. As for the lower limit, the minimum value of the watch would be $0, but it is known that the watches still held some of their value. Exactly how much is not known based on the information given. Still, this is enough to know there was a lower asymptote as well. Again, this is a two-limit situation, so a logistic function would be a good fit to model the data about the watch's value over time. After compiling data on the watches' value and calculating a logistic regression model, this function models the watch's retail value, in thousands of dollars: V(t) = 14/(1 + 0.001e^(0.9t)) + 2, where t is the number of years since 2000. This function's maximum value is 14 + 2 = 16, and its minimum value is 2. Examine the following graph of this function. This function decreases because the value of the watch decreased. Also note that the upper asymptote is indeed y = 16, while the lower asymptote is y = 2. Lesson Summary In this lesson, you examined several situations and their associated logistic functions, learning that there is one key difference in the equations for increasing and decreasing logistic functions, as well as focusing on the minimum and maximum values in logistic functions. Here is a list of the key concepts in this lesson: A logistic function in the form of f(x) = L/(1 + C·e^(−kx)) + m has two asymptotes, an upper asymptote at y = L + m and a lower asymptote at y = m. Given a scenario's upper and lower limits, you can figure out the values of L and m in a logistic function. Given a written scenario, you can determine if there are any asymptotes by seeing if the response variable tends toward specific values. If there is steady growth, that is a good indicator that there are no asymptotes.
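As a quick numeric check of the asymptotes discussed in this lesson, here is a small Python sketch (an illustration, not part of the lesson) that evaluates V(t) far to the left and far to the right:

```python
import math

def V(t):
    """Watch value in thousands of dollars, t years since 2000."""
    return 14 / (1 + 0.001 * math.exp(0.9 * t)) + 2

print(round(V(-20), 4))  # ~16.0: near the upper asymptote y = 16
print(round(V(30), 4))   # ~2.0: near the lower asymptote y = 2
```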

Exponential and Logistic Patterns

In the last section, you saw a linear and a polynomial data set. Now it is time to consider exponential and logistic data patterns. Consider the following chart that shows the smallest micro-transistor size since 2000. Judging by the shape of this data, it might be tempting to use a polynomial function. However, remember that polynomials always tend toward infinity or negative infinity. That would not fit here, as it seems like the size of the micro-transistors is tending toward zero but will never reach zero; their size will never be "nothing." For that reason, either an exponential or logistic function would work. Keep in mind that exponentials always have one horizontal asymptote, because they have one natural limiting factor, while logistic functions always have two, because they have two natural limiting factors. In this scenario, there is only a limit on how small the micro-transistors could be, not on how big they could be. That means that an exponential model would be more suitable here than a logistic model. Also, it is well known that Moore's Law predicts exponential development patterns in the IT industry. Recall that exponential functions have a constant ratio. Examine the following examples of an exponential function's graph. Finally, consider a logistic data pattern for the micro-transistor data set. A logistic pattern would not fit because logistic functions always have two asymptotes, which usually implies two natural limiting factors. Logistic data patterns are also generally S-shaped, which this data set is not. Examine the following data about some rabbits that were introduced into a forest in 2000. The chart shows the estimated number of rabbits in the forest since 2000. Do you see the "S"? Only a logistic function can model S-shaped data like this. Notice the rabbit population is approaching an upper limit (a maximum population), which is due to natural resource limitations in the forest. Similarly, back in time, there is also a lower limit (a minimum population), which is 0. In general, logistic functions approach a limit in the long run, making them a perfect fit to model such data. The next graphs are examples of a logistic function's graph. While the lower limit in both of those logistic functions is 0, keep in mind the lower limit for a logistic function can be any value, positive or negative.
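Returning to the constant-ratio property mentioned above: for equally spaced inputs, an exponential pattern shows roughly the same ratio between consecutive outputs. Here is a small Python sketch of that check, using made-up transistor sizes (hypothetical numbers, not the chart's actual data):

```python
# Hypothetical smallest micro-transistor sizes (nm) at equally spaced years.
sizes = [130, 90, 62, 43, 30, 21]

# An exponential (decay) pattern keeps consecutive ratios nearly constant.
ratios = [b / a for a, b in zip(sizes, sizes[1:])]
print([round(r, 2) for r in ratios])  # all near ~0.69, suggesting exponential decay
```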

Outliers in Non-Linear Regressions

In the last section, you saw how outliers affected linear regressions. In this section, you will see that outliers affect all regressions. Consider this example: Gold Plains sells organic milk. Each time the company raises prices, fewer customers buy its milk. Gold Plains has been trying different prices in order to achieve the highest possible daily revenue. The next graph is a scatterplot of the price-to-daily-revenue relationship, using a polynomial regression model. Given the points in the data set, why did Gold Plains use a polynomial regression model instead of a linear model? It is true that the data points appear fairly linear, but a linear model would indicate that the price of milk could go up, along with revenue, indefinitely. However, this is not true; there comes a time when a company can price itself out of its market, with customers unwilling to pay. Therefore, Gold Plains knows that there is a downward turn somewhere in the data, even if no one can see exactly where it is based on the data so far. This is precisely why the company needs a regression function. The regression function allows management to predict where this downward turn (which could be called "the sweet spot" price) is in the data without having to price-gouge customers to find it. In the regression function, notice that point A is a possible outlier, because it is separate from the rest of the data. Upon investigation, the company found that the data associated with point A was actually mis-recorded; instead of (1, 240), this data point should have been (1, 250). Examine this data in the following applet. Click on point A, adjust it to about (1, 250), and watch the impact on the regression function and graph. As you move point A upward, notice the function becoming wider and wider and the equation of the function changing as well. This is why adjusting for possible outliers is so important, even for polynomial regressions. Notice that point F is also a possible outlier. One of Gold Plains' major competitors had delivery problems on that day, so customers bought more products from Gold Plains, and that increased Gold Plains' revenue. Therefore, point F is a true outlier and should be removed from the data. The following graph is the regression with point F removed. Now, with point A corrected and point F removed, this looks like a strong regression equation.
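To see for yourself how a single point pulls on a polynomial regression, here is a Python sketch that fits a quadratic before and after correcting one mis-recorded point. The price and revenue numbers are made up for illustration; they are not Gold Plains' actual data.

```python
import numpy as np

prices  = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
revenue = np.array([240, 260, 271, 280, 284, 286, 282])  # hypothetical values

before = np.polyfit(prices, revenue, 2)   # least-squares quadratic fit

revenue_fixed = revenue.copy()
revenue_fixed[0] = 250                    # correct the mis-recorded point
after = np.polyfit(prices, revenue_fixed, 2)

print(np.round(before, 2))  # coefficients with the bad point...
print(np.round(after, 2))   # ...shift once the point is corrected
```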

Predicting Future Budgets with Extrapolations

In the last section, you saw how you could use the correlation coefficient to make some reasonable conclusions. In this section, you will learn how to use extrapolations to make some reasonable conclusions, such as predicting future budget needs. Consider this example: Randall Computers needs to create a budget for the services department. In 2015, the company gathered data from previous years and used it to create a regression model to predict future years. The following scatterplot shows Randall Computers' funding for its services department from 2000 to 2015 with a polynomial regression. [The graph plots Years since 2000 on the horizontal axis and Funding in Services Department in Millions of Dollars on the vertical axis. A curve rises with decreasing steepness from (0, 9.6) through (16, 15.2). A series of data points closely follows the curve, and the equation near the curve reads: f of x equals negative 0.01 times x squared plus 0.53 times x plus 9.63, and r squared equals 0.98.] © 2018 WGU, Powered by GeoGebra With 16 data points, no possible outliers, and a correlation coefficient of 0.99, this regression is strong and could be used to answer the question of funding predictions in 2016 using an extrapolation. Remember that polynomials should not be used to extrapolate too far into the future or the past, since all polynomials tend toward infinity or negative infinity in their long-term trends. Since x = 16 is not too far into the future, it is possible to predict funding for the services department in 2016 by estimating the coordinate from the graph. Since there are 5 grid marks between 15 and 16 on the y-axis, each grid mark represents 1/5 = 0.2. Therefore, the coordinate is approximately (16, 15.15). This means the services department will need about $15,150,000 for its budget in 2016 if the general trend were to continue.

A Better Model for Global Population

In the last unit, you saw an exponential model for the global human population that predicted unlimited growth. The problem is that there are limits on the growth of the population. If nothing else, there is a finite amount of space on the planet; there are also many other limitations, like growing enough food, providing water and shelter for everyone, producing enough electricity for everyone, and other concerns. In short, there is an upper limit to the global human population, and logistic functions naturally have an upper limit. Revisit the human population model to see how logistic functions can do better than exponential functions. From the last unit, you may recall that the human population, in billions, can be modeled by the exponential function p(t) = 0.0000021399 × e^(0.0148317t), where t is the number of years since the year 1000 in the Common Era. The following is the graph of this function: First, look at the instantaneous rates of change involving the points A and B in the previous graph. Point A (1000, 5.91) implies that the world population was 5.91 billion at the beginning of 2000. At this point, the instantaneous rate of change was approximately 0.088 billion per year. This instantaneous rate of change means that, at the beginning of 2000, the human population was increasing at the rate of 0.088 billion per year, or 88 million per year. Second, point B (1010, 6.86) implies that the world's population was 6.86 billion at the beginning of 2010. At this point, the instantaneous rate of change was approximately 0.102 billion per year. This point implies that at the beginning of 2010, the human population was increasing at the rate of 0.102 billion per year, or 102 million per year. This means that in just 10 years, the instantaneous rate of growth for the human population went up by 14 million more humans per year. What can the average rate of change tell you over the interval between points A and B? Using the slope formula, the average rate of change of the human population from 2000 to 2010 was 0.095 billion per year, which implies that the human population was increasing at an average rate of 95 million per year from 2000 to 2010. All of these interpretations are based on an exponential function. How might the interpretation of rates of change differ if a logistic function is used? The answer is that the interpretations actually will not be different at all. You will still interpret rates of change exactly the same way. That said, you will get slightly different rates of change when using a logistic function to model the global population. In this unit, you will see similar applications of those concepts for logistic functions, and you will see that the interpretation is exactly the same. You will revisit this problem with a logistic function later.
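Here is a short Python sketch (an illustration, not from the lesson) that reproduces the two population values and the average rate of change between them:

```python
import math

def p(t):
    """World population in billions, t years since 1000 CE."""
    return 0.0000021399 * math.exp(0.0148317 * t)

pop_2000, pop_2010 = p(1000), p(1010)
print(round(pop_2000, 2), round(pop_2010, 2))  # ~5.91 and ~6.86 billion

avg_rate = (pop_2010 - pop_2000) / (1010 - 1000)
print(round(avg_rate, 3))  # ~0.095 billion per year, i.e., ~95 million per year
```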

Given a real-world situation that can be modeled with an exponential function, interpret the corresponding asymptote.

In the middle of the last century, something called a "computer" was invented. The machine occupied a large room, cost as much to maintain as a medium-sized town, and could add, subtract, multiply, and divide. The idea that someday in the future, individual citizens would own or even be able to lift a computer was laughed at. People could not even imagine why they would want such a thing. Since then, computers have become much smaller and their capabilities have grown immensely. In this course, you will see a graph with an old Commodore 64 as the starting point and an asymptote showing the rapidly growing increase of computing power. In the last lesson, you saw how some asymptotes were identified and estimated using the graph. In this lesson, you will get more practice with that. More importantly, you will work on interpreting asymptotes in context.

Modeling Real Data with Polynomials

In the previous lessons, you have seen how you can use polynomial functions to answer questions about real-world situations, but you may have been wondering where these functions come from. These functions are not just pulled out of thin air; in many disciplines, data is collected and then a process called regression is done to find a particular function that fits the data well. You will see how that process works in this lesson. Consider this example: Youth Again, a mall management company, opened a shopping center in a new city last year. The following table shows the number of customers who visited the new shopping center on the first day of each month:

Date: Number of Shoppers
February 1: 1,400
March 1: 2,600
April 1: 3,000
May 1: 3,250
June 1: 3,000
July 1: 2,500
August 1: 2,100
September 1: 1,700
October 1: 200
November 1: 1,250
December 1: 1,300
January 1 of the following year: 1,650

By turning the data into coordinates, Youth Again created what is called a scatterplot of the data. A scatterplot is a graph of what may seem like "random" coordinates. Notice that February 1 corresponds to x = 1, March 1 corresponds to x = 2, and so on. You can see two turns in this data, meaning that a 3rd-degree polynomial is probably a good candidate to model this data. Be sure to notice that the point at month 9 is away from the general trend in the data. To predict values in the future, it would be ideal to find a function that passes through all the data points. However, that rarely happens in real life, so instead, just try to get as close as possible without making the function more complex than necessary. The best you can usually do when working with real data is to find a "curve of best fit," which helps you to know, on average, what the behavior of the data is. In other words, usually some data points are above the curve, some are below it, and some other points happen to be exactly on the curve. When you model a data set with a curve of best fit, you are doing a data regression. One other note before continuing: There are actually many techniques for estimating curves of best fit, and mathematicians and statisticians often debate which is the best technique. In this course, do not worry about those debates. Instead, you will focus on the most widely accepted and used technique: the least-squares regression (LSR) algorithm. You will not be looking at how that process works (and it will not be on the assessment), so your focus is purely on interpreting the results of an LSR to make sense of data. In the applet, note that the point (10, 1250) is a data point. However, if you use the function to estimate the y-value when x = 10, you have f(10) = 886. There is an error of 1250 − 886 = 364. Do not worry; this is totally normal in data regression. The curve of best fit is just a "best guess." It is normal to see a difference between the estimated value and the real value. Two final things to note here: First, note that Point I in the applet is very far away from the regression function. A point like this is called an outlier because it lies outside the trend. You will learn how to handle outliers in the next lesson. Second, note that the regression function above matches the general trend of the data; that is, the regression function above does not do something outside the indications of the data either to the left of x = 1 or to the right of x = 12. Some examples of regression functions that behave "weirdly" outside of the data values will be shown next.
This first example is "weird" because the data does not indicate there should be an increase to the right of x = 12. Also, there is no reason for the "upswing" in y-values to the left of x = 1. This second regression function is "weird" because there is an upswing to the left of x = 0 and to the right of x = 4. There is nothing in the data to indicate this general trend, which means this is not a good model in terms of fit. Also, this data only has one turn, so a 2nd-degree polynomial should be used, but instead, a 4th-degree polynomial was used, which was a mistake. This last one may seem fine at first glance, but notice how there are no turns in this data. Zero turns means that a degree 1 polynomial, or a linear function, is the simplest and should generally be used instead of the degree 4 polynomial pictured next.
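If you want to experiment with least-squares regression yourself, here is a Python sketch that fits a cubic to the shopper data from the table above and checks the estimate at x = 10. This is only an illustration; your coefficients may differ slightly from the lesson's applet depending on the tool used.

```python
import numpy as np

months   = np.arange(1, 13)  # February 1 = 1, March 1 = 2, ..., January 1 = 12
shoppers = np.array([1400, 2600, 3000, 3250, 3000, 2500,
                     2100, 1700, 200, 1250, 1300, 1650])

coeffs = np.polyfit(months, shoppers, 3)  # least-squares cubic fit
f = np.poly1d(coeffs)
print(round(float(f(10))))  # compare with the lesson's reported estimate of about 886
```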

Estimating Additional Help Needed for Peak Times

In the previous section, you saw how you could still use average and instantaneous rates of change for line graphs. In this section, you will see the graph of a function you have never worked with before. That does not mean you cannot use the average and instantaneous rates of change to understand how things are changing with this new, unknown function. Consider this example: Despite a few cautions, rates of change are useful in analyzing and understanding many situations. Think about the number of IT help calls that occur when a new software product is released. At first, only a few calls come in. Why? There have only been a few sales, so there are not many users yet. As more sales occur and the software is used by more people, the calls increase. Then as time goes on, help calls taper off because users have learned how to work the software and have fewer questions about it. Consider the following graph of IT help calls for a new database program over the first 90 days after its release. [The graph has Days plotted on the x-axis and Calls per Hour plotted on the y-axis. A curve rises with increasing steepness from the point (0, 0) to the point (22, 135.6), then slopes downward through (40, 80), and approaches the x-axis at x equals 90. A line passes through both points. Text reads: Average Rate of Change equals 135.6 minus 0 all over left parenthesis 22 minus 0 right parenthesis equals 135.6 over 22 equals 6.2.] © 2018 WGU, Powered by GeoGebra Even though the equation for this function is unknown, you can still determine the average rate of change in calls per hour per day, from the release date until the calls maxed out, using the points (0, 0) and (22, 135.6). For example, at t = 0, or at the point (0, 0), the instantaneous rate of change is 10 calls per hour per day. This number, 10, implies that at the instant of the software's release, the call center is expected to receive 10 more calls per hour per day. What about the instantaneous change on the peak day, day 22? When the point is at (22, 135.6), the instantaneous rate of change is 0.2 calls per hour per day. The number 0.2 implies that on the 22nd day after the release date, the number of calls is staying about the same, or that there is essentially no change in the number of calls coming in about the new software. It makes sense that on the peak day, the number of calls is not going up anymore and has not started going down yet. It is not until the next day, day 23, that things start to drop off. Compare this with the average rate of change from day 0 to day 22; that average rate of change was 6.2 calls per hour per day on average. Average and instantaneous rates of change focus on different things. The average rate of change focuses on the average change over a period, while the instantaneous rate of change focuses on the trend of change at a particular instant. Using the slope formula: m = (y2 − y1)/(x2 − x1) = (135.6 − 0)/(22 − 0) = 135.6/22 ≈ 6.2. In practical terms, 6.2 means that between day 0 and day 22, there were an average of about 6 more calls per hour each day. This means that on day 22, about 135 calls per hour should be expected, and that starting from day 0, the calls increased by about 6 more calls per hour as each day went by. Now try calculating the average rate of change between day 22 and day 40. To do this, move the points in the applet to the corresponding places for t = 22 and t = 40. In the following applet, as you drag those two points along the function's graph, you will see different values in the average rate of change.
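For the day-22-to-day-40 exercise, you can apply the same slope formula to the approximate points read from the graph, (22, 135.6) and (40, 80). A quick Python sketch (using those graph estimates, so the result is approximate):

```python
x1, y1 = 22, 135.6  # peak day (from the graph)
x2, y2 = 40, 80     # later point (estimated from the graph)

rate = (y2 - y1) / (x2 - x1)
print(round(rate, 2))  # ~ -3.09: calls per hour were dropping by about 3 per day, on average
```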
Lesson Summary In this lesson, you learned how to calculate the average rate of change and how to interpret both average rates of change and instantaneous rates of change. The two types of rates must be differentiated because they mean very different things. Here is a list of the key concepts in this lesson: An average rate of change tells how fast something is increasing or decreasing on average over a period of time. An instantaneous rate of change tells how fast something is increasing or decreasing at a particular moment. The average rate of change can be determined by computing the slope of the line through two points at different times. A positive average or instantaneous rate of change means that the function values are increasing, while a negative average or instantaneous rate of change means that the function values are decreasing. An instantaneous rate of change should not be used to predict values in the distant future. You can estimate which of two instantaneous rates of change is greater by visually comparing the lines representing them; the steeper line represents the larger instantaneous rate of change.

Number of Viewers

In this course, you have encountered quite a few models related to various situations. Until now, you have mostly assumed that those models were created and used in a proper manner. However, sometimes models are not used appropriately. In this lesson, you will learn some basic techniques for checking whether a model meets minimum requirements for proper use. This module will give you a set of four tools to ensure that models are calculated to a minimum standard and that the results or conclusions drawn from a model are at least reasonable. You can remember these four tools by keeping in mind that there are SOME things that must be checked for every model: S: sample size (about 10 or more data points) O: outliers M: model strength and model choice E: extrapolations, if any Netterly's new TV series started off well, with critical approval and strong audience interest. However, the number of viewers dropped with every episode. The following scatterplot shows the number of viewers of the first five episodes with a regression line. The correlation coefficient is −0.83, implying a strong correlation between the episode number and the number of viewers, but the general trend is definitely down. One of the executives at Netterly suggested that if this trend were to continue, there would be essentially no viewers by the sixth or seventh episode. Is this conclusion reasonable? Probably not. Look at the first tool among the SOME things for this model. It is immediately evident that there is not an adequate sample size here; there are only five data points, not even close to the recommended 10 or more. That indicates that the executive's fear may be premature. If there were a longer-term pattern showing a decline in the show, then the executive might worry about the show's future. With the current data, though, there is nothing to suggest the show's producers cannot turn things around. Also, even though the correlation coefficient is strong, it is too early to interpret it; without an adequate sample size, you really cannot draw conclusions. For example, the next few episodes could very well win back previous viewers and add more viewers. This means that the show could actually level off at about 1.5 million viewers, and that would be a much better trend than the executive's prediction. Here is another situation to explore. Moon Labs has been trying to cut down on its employees' traveling expenditures. In 2005, the company started a program to reduce employee travel expenditures, and the company has been tracking this data since. Using the data, the chief financial officer (CFO) built a regression model, E(t) = 112.05e^(−0.05t), where t measures the time since 2005, in years, and E measures the expenditures per trip. In this month's budget meeting, the CFO claimed that by 2025, Moon Labs will have employee expenditures per trip down to about $40. She bases this claim on the model, since E(20) ≈ 41.22. Is this a reasonable conclusion? It is not, because when you check SOME for this model, you find that for S, there is an adequate sample size; however, for O, there is a possible outlier that has not been explained or removed from the data set. The possible outlier at t = 9 needs to be explained or removed from the data set before doing anything further with the model. If that data point truly is an outlier, or if influences outside the variables themselves caused this value to be low, then the data point should be removed.
If the value at t = 9 was lower simply due to employees' hard work to lower expenditures that year, then the data point should be kept. Remember to only throw out true outliers: values that are away from the norm for reasons outside the influence of the variables. Either way, this possible outlier pulled the regression function down, since it is located below the rest of the data, meaning that this model may be projecting lower expenditures than are realistic. The conclusion for Moon Labs depends on whether the point at t = 9 should be removed or not. This outlier needs to be addressed or removed from the data set before anything else can be done with this model. At the next monthly budget meeting, the CFO announced that the data point was truly an outlier; it corresponded to 2014, when the company had a severe drop in market value and cut back strictly on employee travel. Therefore, the point was removed, and the following new regression model was calculated without the outlier. The CFO now projects that the expenditures per trip will be about $48 by 2025. Is this conclusion valid? Before answering, check SOME: For S, there is an adequate sample size here. For O, there are no remaining possible outliers. For M, model strength seems strong since r^2 = (0.95)^2 ≈ 0.90. It also seems that the choice of an exponential function here is appropriate, since expenditures per trip could be reduced at a constant proportion over time. For E, the extrapolation at t = 20 is a bit far into the future. In fact, the furthest to extrapolate before reaching extreme extrapolation values is x_max + 0.50 × range = 12 + 0.50 × 12 = 18, which would be 2023. Recall that regression professionals can extrapolate further into the future with adequate data than a nonprofessional can. Perhaps t = 20 is not too much of a stretch, since the extreme extrapolation values start at t = 18. However, be careful with this extrapolation, since the model the CFO reported on at the last meeting included an outlier. In terms of this course, bear in mind that there is no definitive way to know if a prediction is reasonable or not. There are more advanced regression techniques that could analyze the reasonableness of this conclusion, but those techniques are beyond the scope of this course.
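Here is a quick numeric check of the CFO's figures as a Python sketch. It is an illustration of the course's extrapolation rule of thumb; the data range of t = 0 to t = 12 is taken from the calculation above.

```python
import math

def E(t):
    """Expenditures per trip in dollars, t years since 2005."""
    return 112.05 * math.exp(-0.05 * t)

print(round(E(20), 2))  # ~41.22: the CFO's original projection for 2025

# Extreme-extrapolation threshold for this model: x_max + 0.50 * range
t_min, t_max = 0, 12
threshold = t_max + 0.50 * (t_max - t_min)
print(threshold)  # 18.0, i.e., the year 2023; t = 20 lies beyond it
```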

PC Acquisition Plans

In this lesson, you will get practice putting all of the skills for rates of change for logistic functions to use. Specifically, you will see how to identify an ideal situation using various aspects of rates of change in context. Highcrest Realtors is planning to replace most of its personal computers (PCs) nationwide. Extensive testing is needed at the initial stage to make sure there are no problems in the deployment, so progress should be slow at first. Eventually, however, the function for this initiative should approach the total number of PCs to be replaced. The nature of the project makes a logistic function a good fit to model the number of new PCs to be installed. The number of PCs installed under Plan A and Plan B can be modeled by the following functions: a(t) = 8236/(1 + 2059e^(−0.35t)) − 4 and b(t) = 8236/(1 + 2059e^(−0.25t)) − 4. The following graph depicts these functions. On the left part of the graph, when t < 5, both functions increase very little, with rates of change close to 0, implying that few new PCs would be installed in the first few days. Most of the time would be spent on planning and testing. In the middle part of each function's graph, as judged by the steepness of the lines in the graph, the instantaneous rate of change of a(t) is larger than that of b(t), implying that computers are being installed at a faster rate under Plan A than under Plan B. This can be verified by the fact that a(t) reached its upper limit earlier than b(t), and that the number in a(t)'s exponent, 0.35, is larger than the 0.25 in b(t)'s. On the right part of the graph, when t > 55, both functions increase very little again, implying that all the new PCs have been installed. Which bidder should Highcrest choose? If management wants to save time on this project, it should choose Plan A, which installs PCs faster. If it wants to ensure the installation of the new PCs goes well, to minimize the impact on the IT department, it should choose Plan B. Overall, management at Highcrest decides to go with saving time, or Plan A. Lesson Summary In this lesson, you not only compared two logistic functions' rates of change, but you also interpreted what the numbers meant in context and chose which function was optimal. Here is a list of the key concepts in this lesson: Both long-term and short-term trends in how logistic functions change can be measured by their rates of change. Comparing the rates of change can help you identify an optimal scenario.

Solving Polynomial Equations Using the Graph

In this lesson, you will see how to solve polynomial equations using a graph and what those solutions can mean in context. Consider this first example: On a typical day, the number of customers at Scarlet Dragon Chinese Restaurant can be modeled by this function: c(t) = −0.2t^4 + 4t^3 − 26t^2 + 63t, where t is the number of hours since 10:00 a.m., when the restaurant opens. This function is depicted in the following graph: In previous lessons, graphs were used to estimate the output given an input. For example, at 11:00 a.m. on a typical day (or t = 1), you can tell that the restaurant has approximately 41 customers because the function passes through the point (1, 41). You can also verify c(1) = 40.8 ≈ 41 if you want a more exact way of determining this value. Remember: The Scarlet Dragon's main dining room only seats 60 people; the overflow area seats additional people, but it is not the preferred space. The question is, "During what hours of the day should the Scarlet Dragon's staff plan on using the overflow area?" In this case, you need to estimate an input given an output, because 60 people refers to the dependent variable, c. In other words, what input values would make the function's value 60? To find the answer, solve for t in c(t) = 60. First, focus on where the dependent variable is equal to 60, or where c = 60. In the following graph, a green horizontal line (a "trace" line) helps you focus on these specific values. You can then find where this line hits, or intersects, the function, which in this case is actually at two different coordinates. The two coordinates are the solutions. Remember that you are looking for an input value given an output value, so now you need to identify what the corresponding input values are for these two coordinates, or solutions. To do that, you next trace these coordinates down to the independent variable axis. If you do that, the results will appear like the following graph: From this graph, it looks like the corresponding independent variable values would be about t ≈ 7.1 and t ≈ 9.25. This means that the graph passes through the points (7.1, 60) and (9.25, 60). In function notation, this would mean that c(7.1) ≈ 60 and c(9.25) ≈ 60. You will work on interpreting these solutions in just a minute. First, check these solutions. This can be done by substituting t = 7.1 into c(t): c(7.1) = −0.2(2541.1681) + 4(357.911) − 26(50.41) + 63(7.1) ≈ 60.05. You did not get exactly 60 because you should expect some degree of error when you do estimations by looking at a graph. However, the result (60.05) is very close to 60, so you should feel pretty confident about t = 7.1 as an approximate solution. Checking c(9.25) ≈ 60 is left as a question for you to try on your own. There is one other thing to keep in mind: For the assessment, you should be able to estimate the input values without the "trace" lines. You should be able to make a reasonable approximation, but you will not have to estimate these solutions with high accuracy.
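If you want to check both solutions without trace lines, you can solve c(t) = 60 numerically. Here is a short Python sketch (an illustration, not a required assessment method) that uses NumPy's polynomial root finder on c(t) − 60:

```python
import numpy as np

# c(t) - 60 = -0.2t^4 + 4t^3 - 26t^2 + 63t - 60
roots = np.roots([-0.2, 4, -26, 63, -60])

# Keep only the real roots; these are the times when c(t) = 60.
real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
print([round(r, 2) for r in real])  # close to the graph estimates t ≈ 7.1 and t ≈ 9.25
```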

Summary

In this lesson, you focused on reading and understanding tables of data, such as the increase in the number of internet users over time. In addition, you encountered the idea that the same information, or data, can be expressed in different ways - tables, graphs, and function notation - yet convey the same meaning. Here is a list of the key concepts in this lesson: The same data can be represented in a table, as ordered pairs, or on a graph. Regardless of how the data are presented, the meaning is the same. Once data is plotted on a graph, it is possible to draw reasonable conclusions about data points that do not actually appear on the graph if they follow a predictable pattern, or trend. It is sometimes helpful to rescale data to make the numbers easier to work with. It is also sometimes helpful to use function notation so that each input value pairs with a single output value. The independent variable is the factor that explains the change in the other; the dependent variable is the factor that responds to changes in the independent variable. Functions can be shown in table form, and tables can be read from right to left or left to right, depending on the situation. If a table is read right to left, you are investigating the inverse of the function that is read left to right.

Lesson Summary

In this lesson, you identified a function's concavity and interpreted it in context. You learned that a function's concavity discloses a lot of information about a function's rates of change. Here is a list of the key concepts in this lesson: Concave up can happen in two ways—either when a function's values increase faster and faster or when they decrease slower and slower. Concave down can happen in two ways—either when a function's values decrease faster and faster or when they increase slower and slower. Concavity also can show how things are turning around in a given situation, such as slowing a company's losses (concave up) or seeing the usage of a server drop off (concave down).

Lesson Summary

In this lesson, you learned how to decide whether a concave up or concave down curve, or portion of a curve, is a better option given different contexts. You saw that there can be a best scenario and a second-best, as well as a worst scenario and a second-worst. Here is a list of the key concepts in this lesson: These are the possible scenarios for any function with concavity: concave up: increasing faster and faster or decreasing slower and slower concave down: decreasing faster and faster or increasing slower and slower The best option for the given situation may be either concave up or concave down. It is the perspective from which the options are viewed that determines the best option.

Calculating Exponential Outputs

In this section, you will learn how to calculate an exponential function's output. Remember that in the order of operations, you calculate exponents before multiplication. Imagine you are calculating the area of two square animal pens. Each pen's side length is 3 yards, so you can calculate each square's area with 3^2 = 9 square yards. Since there are two square pens, you need to double the area: 2 × 9 = 18 square yards is the total area of those two pens. [Image shows calculating the area of two square animal pens. Each pen's side length is 3 yards, so each square's area is 3^2 = 9 yd^2, and the two pens' total area is 2 × 9 = 18 yd^2.] ©2018 WGU If you calculate their total area in one expression, you have: total area = 2 × 3^2. Should you calculate the multiplication or the exponent first? If you understood the pen area problem, you know 3^2 represents each square's area, which should be calculated first. Then, double 9 to get 18 as the two pens' total area. The correct way to calculate the total area is: 2 × 3^2 = 2 × 9 = 18. If you performed the multiplication first, you would get 6^2 = 36 square yards, and the result would not make sense. Another way to think about this is that exponents are shortcuts for writing multiplication. For example, 3^4 is another way of writing 3 × 3 × 3 × 3. In that sense, the shortcuts for multiplication (that is, exponents) should be simplified before the multiplication itself can be done. Either way, this is why the order of operations says you must calculate exponents before multiplying. Go back to the example involving Ping and Faye's money. Recall the function which models Faye's money: F(x) = 10,000 × 1.01^x, where F(x) is the amount of money in Faye's account in dollars, and x is the number of months. To calculate the amount of money in this account after two months, substitute 2 for x in this function, and you have: F(2) = 10,000 × 1.01^2 = 10,000 × 1.0201 = 10,201. Note that you calculated the exponent (1.01^2 = 1.0201) before multiplying (10,000 × 1.0201 = 10,201). The result tells you that Faye would have $10,201.00 in her account after two months. This is a little bit more than Ping's money after two months, which is $10,200.00. That extra dollar may not be so impressive now, but how about 100 months later? Substitute x = 100 into F(x), and you have: F(100) = 10,000 × 1.01^100 ≈ 10,000 × 2.704814 ≈ 27,048.14. Again, you have to calculate the exponent first (1.01^100 ≈ 2.704814) before multiplying (10,000 × 2.704814 ≈ 27,048.14). Faye would have a total of $27,048.14 in 100 months. Compare Faye's total after 100 months with Ping's, who would have 10,000 + 100 × 100 = 20,000 dollars after 100 months. Faye has over $7,000 more than Ping has. Can you see the power of compound interest (and exponential functions) now? In general, exponential functions outgrow linear functions. Calculate 5 × 2^3. You should calculate the exponent before multiplying: 5 × 2^3 = 5 × 8 = 40. The following function models money, in dollars, in an account: F(x) = 100 × 1.06^x, where x is the number of years since the money was put in. Find the amount of money in the account when x = 2 and interpret its meaning in this context. Substituting x = 2 into F(x) = 100 × 1.06^x, you have F(2) = 100 × 1.06^2 = 100 × 1.1236 = 112.36. This implies that the account will have $112.36 two years later.
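Here is a minimal Python sketch of these calculations (an illustration, not part of the lesson). Note that Python's ** operator is applied before *, matching the order of operations described above:

```python
def F(x):
    """Faye's balance in dollars after x months: 10,000 * 1.01^x."""
    return 10_000 * 1.01 ** x  # the exponent is evaluated before the multiplication

print(round(F(2), 2))    # 10201.0
print(round(F(100), 2))  # ~27048.14

ping = 10_000 + 100 * 100  # Ping's linear growth after 100 months
print(ping)                # 20000
```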

Interpreting Rates of Change Continued

In this section, you will learn more about interpreting average rates of change. Remember that an average rate of change is a measurement of how one variable is changing with respect to another variable over a specific interval. Consider this weight-loss example. Say that you started a diet program and in week 1, you weighed 220 pounds. Then say that by week 4, you weighed 214 pounds. The average rate of change over the two specific values (1, 220) and (4, 214) is: m = (y2 − y1)/(x2 − x1) = (214 − 220)/(4 − 1) = −6/3 = −2. How can this calculation be interpreted? Remember, the units of the average rate of change are determined by the x- and y-variables. Since the slope, m, is measured as "change in y" divided by "change in x," you can just divide the units of the y-variable by the units of the x-variable to see what your average rate of change is measuring. In this example, the y-variable measures pounds, while the x-variable measures weeks. That means the average rate of change measures "pounds per week." The result of −2 means that you were, on average, losing about two pounds per week from week 1 to week 4. Now, look at this example: For extra money, Demi has started a dog-grooming service. She has tracked her profits for the first six months of the year, shown in the table and graph below. What is the average rate of change in profit between month 4 and month 6?

x = Month: 1, 2, 3, 4, 5, 6
y = Profit: $4, $6, $20, $52, $108, $194

A graph is not needed for this calculation; the slope formula can be used with the coordinates (4, 52) and (6, 194): m = (change in y)/(change in x) = (y2 − y1)/(x2 − x1) = (194 − 52)/(6 − 4) = 142/2 = 71. As far as interpreting this number, recall that an average rate of change is how one variable changes with respect to another over a specific interval. In this example, you can see how the y-variable (profit) changes with respect to the x-variable (months) between the fourth and sixth months. That means the slope of 71/1, or 71, shows that profit went up on average $71 per month between months 4 and 6. You may be wondering why it is important to say "on average" here; that is because between months 4 and 5, Demi's profit went up $56 (found by 108 − 52), while between months 5 and 6, Demi's profit went up $86. The number 71 is meant to represent how Demi's profit went up over the whole interval from month 4 to month 6, however. The average rate of change in profit between months 4 and 6 is $71 per month. To verify this graphically, check out the following applet. The following graph is a polynomial function which models Demi's profit over the months. See if you can manipulate the applet to find the average rate of change between months 4 and 6. [The graph shows a polynomial function comparing months on the x-axis versus profit on the y-axis. The function generally increases from left to right. The average rate of change between point A at (0, 8) and point B at (1, 4) is negative 4 dollars per month. Moving point A to x = 4 and point B to x = 6 results in an average rate of change of 71 dollars per month.] Set the sliders to a = 4 and b = 6 or vice versa. The average rate of change should be 71 either way. The "1" in the triangle representing the average rate of change is a reminder that the ratio for the variables here is 71:1. This reinforces the fact that for each month that goes by, Demi saw an average increase of $71 in profits from months 4 to 6. You can also use the slider to calculate the average rate of change for many other intervals.
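Before working through the interval list below, here is a small Python sketch (an illustration, not the applet) that computes the month-4-to-month-6 slope from the table, plus the month-to-month changes that explain why "on average" matters:

```python
profits = {1: 4, 2: 6, 3: 20, 4: 52, 5: 108, 6: 194}  # Demi's monthly profit in dollars

avg_4_to_6 = (profits[6] - profits[4]) / (6 - 4)
print(avg_4_to_6)  # 71.0 dollars per month, on average

print(profits[5] - profits[4])  # 56: actual change from month 4 to month 5
print(profits[6] - profits[5])  # 86: actual change from month 5 to month 6
```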
Below are some intervals, their average rates of change, and their interpretations. See how you are doing by using the applet above to interpret these average rates of change.

Interval [1, 4], average rate of change 16: Between months 1 and 4, Demi saw an increase in profit of about $16 per month.
Interval [0, 1], average rate of change −4: Between months 0 and 1, Demi saw a decrease in profit of about $4 per month. [Note: Interpret month 0 as the initial day that Demi opened her dog-grooming service. A negative average rate of change here means she must have had a busy opening day but a slower end to the first month.]
Interval [2, 6], average rate of change 47: Between months 2 and 6, Demi saw an increase in profit of about $47 per month.
Interval [3.5, 5.5], average rate of change 56.75: Between months 3.5 and 5.5, Demi saw an increase in profit of about $56.75 per month. [Note: Interpret the value "3.5" as meaning "halfway through the third month." Similarly, "5.5" should be interpreted as "halfway through the fifth month."]

The height, in feet, of an arrow shot straight upward from the ground is given by h(t) = −16t^2 + 48t. Calculate and interpret the average rate of change between the initial firing of the arrow (t = 0) and 3 seconds after firing (t = 3). When you compute (0 − 0)/(3 − 0), you will get 0/3, or 0. With the units of the y-variable (feet) and the units of the x-variable (seconds), it is clear this is measuring 0 feet per second. To get an average rate of change of 0, this must mean nothing changed overall between t = 0 and t = 3, which is true since the arrow is at height 0 at both times. Average rates of change provide a way of looking at change over a specified interval. This helps track change over time. However, what if you needed to know how things were changing at a specific instant? This is where instantaneous rates of change are helpful, because they measure change at a specific instant. Here are some scenarios where knowing the rate of change at a specific instant would be helpful, and what it could be used for. Keep in mind that, similar to average rates of change, an instantaneous rate of change measures how one variable is changing with respect to another, but instantaneous rates of change do so at a particular instant instead of over an interval.

Measuring how many approved mortgages per month there are: an indicator of how the mortgage and real estate markets are recovering. A higher number of approved mortgages per month could mean those markets are improving or recovering.
Measuring how many IT help tickets per day are submitted: a sudden or instantaneous increase in the number of IT help tickets could mean that a recent update rollout did not go as expected.
Measuring the income per person across the United States: income is an indicator of how well the economy is keeping up with inflation, cost of living, and other economic factors.
Measuring how fast you are driving in your car: this is used for making sure you do not get speeding tickets!

You should know how instantaneous rates of change look graphically.
An average rate of change is the slope between two points; an instantaneous rate of change is the slope at one specific point (the point corresponding to one instant). For example, you may remember Alvin's roller coaster simulator. Alvin modeled the height of his roller coaster using the function h(t) = 0.2t^2 − 1.6t + 3.2, where t is time in seconds and h is the height in feet. The average rate of change from t = 0 to t = 4 was equal to −0.8 feet per second, represented by the slope of the line on the following graph. Note how the instantaneous rate of change is measured as the slope of the blue line above, which only intersects the roller coaster curve (in red) at point A. This is what was meant earlier when it was stated that the instantaneous rate of change is the slope at one point. Essentially, think of the instantaneous rate of change as a measurement of how things would continue to change at a particular instant if no other changes happened beyond that instant. That is why it is called an instantaneous rate of change: it measures how things change at that point while ignoring future changes. You should get an instantaneous rate of change of −1.2 when t = 1. To see this, set a = 1, and you should see the corresponding instantaneous rate of change of −1.2. But how is this particular instantaneous rate of change interpreted? It means that at this instant (when t = 1), the roller coaster is going down at a rate of 1.2 feet per second. This is going down more slowly than when t = 0. Recall that the instantaneous rate of change when t = 0 was −1.6, meaning the roller coaster was going down at a rate of 1.6 feet per second when t = 0. If you think about the curve as a roller coaster, this makes sense; the roller coaster is going down more steeply when t = 0 compared to t = 1. The instantaneous rates of change give an exact measurement of how much faster the roller coaster is going down (or how much steeper it is) when t = 0. Suppose Alvin is interested in what happens at exactly 3 seconds into the ride. Use the applet to find and interpret this instantaneous rate of change. Now return to the CPU question from earlier: the processing time a CPU needs to complete a process (measured in seconds) depends on the processing power of the CPU (measured in gigahertz), so a better processor will get a job done faster. If the processing time is 1 second for a 2.7-gigahertz processor while it is 3 seconds for a 1.5-gigahertz processor, what is the average rate of change between these values, and what does it mean? The average rate of change is about −1.7 seconds per gigahertz, meaning that processing time goes down by about 1.7 seconds for each additional gigahertz of processing power. Finally, does the line in the graph represent an instantaneous rate of change or an average rate of change, and what are the time value(s) of interest for this particular rate of change? If the slope is calculated at one point, specifically when t = 2.5, the line represents an instantaneous rate of change; to be an average rate of change, the slope would need to be calculated at two points. If the slope is calculated at two points, specifically when t = 1.5 and t = 3, the line represents an average rate of change; an instantaneous rate of change would calculate the slope at one point.
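To double-check the roller coaster values from this section, here is a Python sketch (an illustration; the applet computes these exactly) that approximates instantaneous rates of change of h(t) with a small finite difference:

```python
def h(t):
    """Roller coaster height in feet at t seconds."""
    return 0.2 * t**2 - 1.6 * t + 3.2

def instantaneous_rate(f, t, step=1e-6):
    """Approximate f'(t) with a central finite difference."""
    return (f(t + step) - f(t - step)) / (2 * step)

print(round(instantaneous_rate(h, 0), 2))  # -1.6 feet per second
print(round(instantaneous_rate(h, 1), 2))  # -1.2 feet per second
print(round(instantaneous_rate(h, 3), 2))  # -0.4: the t = 3 value Alvin wants
```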

Identifying Asymptotes in Real-World Scenarios

In this section, you will look at some real-world scenarios and determine if the variables in the situation could involve an asymptote. Sometimes, this is very hard or impossible to do based on the scenario alone. In many cases, you need data or a graph to definitively determine if there is an asymptote. However, asking if variables have limitations on them is always a good practice. Consider this scenario: Sarah has been on an "unlimited" data plan with her cell phone carrier. However, she has noticed that as her data usage goes up each month, her download speed severely slows down. Does Sarah truly have unlimited data? Actually, probably not. Some cell phone carriers have been accused of throttling data on unlimited data plans—or slowing down data—after people on unlimited plans have used a large amount of data during a billing period. This is actually a good example of a real-world situation that has a natural limitation—that is, on how much high-speed data usage customers can use each month. Here is another scenario to consider: Janice manages the travel budget for a company where employees travel frequently to visit clients. She has been trying to keep company travel expenses to a minimum, starting by finding cheaper travel options. Since the company relies on personal relationships with clients, which means visiting them, there simply must be some travel expense. However, as Janice knows, there is a limit on how low she can drive down company travel expenses. Said another way, she finds that the y-values, meaning travel expenses, tend toward a certain value as she drives costs down. Here is another example: Consider your own height over the course of your lifetime. As you grew, you got taller, but your height did not increase forever. Our bodies have natural limitations on them, and eventually, humans hit a natural limitation on their height. Said another way, if height is related to time in years (meaning age, in this case), then the y-values tend towards a specific value as time goes on. On a graph, this y-value would be on the right side. Here is one last example: The speed of light is really fast but it is also fixed—that is, light moves at one and only one speed. Electricity and electrons are very similar; there is a limit on just how fast they can move. This natural limitation on the speed of electricity has huge implications for computers. Since the speed of electricity is limited, so is the speed of the electric components in computers. Eventually, computers will reach the point where they cannot get any faster. In other words, if computer speeds are related to time in years, then the y-values—computer speeds—are tending toward some maximum possible value as time goes on. On a graph, this y-value would be on the right side. In summary, it can be difficult or even impossible to tell if a particular situation involves asymptotes from a written description alone. However, there are some situations, like the ones above, where it is possible to identify natural limitations on the variables in play. For the situations like the ones above, you should be able to determine if there is the possibility of an asymptote. Lesson Summary In this lesson, you learned that you can identify asymptotes, or natural limitations, in at least two ways—by examining a graph and looking for y-values to tend to a specific value on the left or right side of a graph, and by analyzing a written description. 
Here is a list of the key concepts in this lesson: You can identify asymptotes either graphically or by looking for natural limitations on the variables in a scenario. Identifying horizontal asymptotes graphically means that the y-values tend towards a specific value either on the left or right sides of the graph. A graph can have no asymptotes, one asymptote, or two asymptotes. For real-world scenarios, to identify a possible asymptote means that one of the variables in the scenario has a natural limitation on it, such as height, budgets, or speeds.

Concavity in Action

In this section, you will see a few examples of how you can use concavity with general functions. "General functions" here means situations where you might not have the equation of a function, or where the function is of a type you may not be familiar with. The good news is that no matter what function you are presented with, you can use concavity to understand how the rates of change behave over time in a situation. First, a refresher on concavity. There are two types of concavity: concave up and concave down. The following image shows both types: To help you memorize what is concave up and what is concave down, look at these graphs of f(x) = x^2 and g(x) = −x^2. The graph of f(x) = x^2, on top, is concave up, while the graph of g(x) = −x^2, on the bottom, is concave down. Notice that for negative x-values, that is, those values to the left of the y-axis, the concave-up graph decreases more and more slowly; on the other hand, the concave-down graph increases more and more slowly. For positive x-values, though, to the right of the y-axis, the concave-up graph increases faster and faster while the concave-down graph decreases faster and faster. The following summarizes this:

Concave up: the function is increasing faster and faster, or the function is decreasing more and more slowly.
Concave down: the function is increasing more and more slowly, or the function is decreasing faster and faster.

Consider the following real-world example related to driving a car: Seth leaves his house and drives through town, traveling on both highways and surface streets. His speed increases and decreases, depending on the conditions from moment to moment. When he turns onto an on-ramp for a stretch of highway, his speed increases smoothly. However, when he hits a traffic jam, he has to slow down and might even have to detour, which will add distance to his trip and cause a setback. When he sees a child's ball bounce out onto a surface street, he slams on the brakes. How does this work in practice? Think of Seth driving to his destination and keep track of his distance from home. With that in mind, consider the following graph of his distance from his house over time.
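As a side note for cases like Seth's, where you have data rather than an equation: one numeric way to spot concavity is to look at second differences. For equally spaced inputs, positive second differences suggest concave up, and negative ones suggest concave down. A small Python sketch (an illustration, not from the lesson) using f(x) = x^2 and g(x) = −x^2:

```python
xs = [-2, -1, 0, 1, 2]
f = [x**2 for x in xs]   # concave up
g = [-x**2 for x in xs]  # concave down

def second_differences(ys):
    first = [b - a for a, b in zip(ys, ys[1:])]
    return [b - a for a, b in zip(first, first[1:])]

print(second_differences(f))  # [2, 2, 2]: all positive -> concave up
print(second_differences(g))  # [-2, -2, -2]: all negative -> concave down
```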

A Completely Valid Use of Models to Predict Hacker Attacks

In this section, you will see an example that leads to a valid conclusion from start to finish, at least after accounting for a possible outlier. Consider this example: Johan manages web servers for Progress Hospital. The following scatterplot shows the number of hacker attacks each month in 2017. Johan would like to predict the number of hacker attacks in 2018, so he ran an exponential regression.

Review each step of SOMEV: For S, there is an adequate sample size here. For O, there is a possible outlier at x = 4. At this point, it would be correct to stop interpreting this regression model; no valid conclusions can be drawn from it as it stands because of the possible outlier. As it turned out, the explanation for x = 4 was that there were major hacker attacks across the globe in April 2017. Thankfully, these did not happen again. Due to the unusual nature of April 2017's data, it is reasonable to remove that data point and run a new regression, as shown in the following graph.

Review the new model: For S, there is still an adequate sample size. For O, there are no possible outliers. For M, model strength is moderate since r² = 0.59. The model choice, exponential, seems reasonable since the trend seems to be increasing. An argument could be made for a logistic regression, but it is appropriate to continue with the exponential model and see whether it produces unrealistic results. For E, extrapolations would reach into 2018, which is reasonable; the data range is 12 − 1 = 11 months, and a moderate-strength model allows projections up to 25% of the range past the data, so x_max = 12 + 0.25 × 11 = 12 + 2.75 = 14.75. This acceptable extrapolation reaches almost, but not quite, to March 2018. For V, validity should be fine as long as Johan restricts his projections to February 2018 at the furthest.

An appropriate conclusion is that Johan can provide these projections for 2018 and feel confident about them:

Month | Function Value | Interpretation
January 2018 (x = 13) | g(13) = 4.52 × e^(0.1 × 13) ≈ 16.6 | In January 2018, the web servers should expect around 16,600 hacker attacks if trends stay the same.
February 2018 (x = 14) | g(14) = 4.52 × e^(0.1 × 14) ≈ 18.3 | In February 2018, the web servers should expect around 18,300 hacker attacks if trends stay the same.

Assuming the general trends stay the same, Johan's model would work for projecting hacker attacks on the web servers, extending to February 2018. If Johan wants to project further into 2018, he needs to gather more data and recalculate his regression model.
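As a quick check, the projection values and the extrapolation cutoff above can be reproduced in a few lines of Python. This is just a sketch; the function g and the 25%-of-range rule are taken directly from the example:

```python
import math

def g(x):
    """Exponential regression from the example: attacks (thousands) in month x."""
    return 4.52 * math.exp(0.1 * x)

x_min, x_max_data = 1, 12                           # months of 2017 in the data
cutoff = x_max_data + 0.25 * (x_max_data - x_min)   # 25% rule for a moderate model

print(round(cutoff, 2))   # 14.75 -> projections are valid through February 2018
print(round(g(13), 1))    # ~16.6 thousand attacks in January 2018
print(round(g(14), 1))    # ~18.3 thousand attacks in February 2018
```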

When Extrapolations Go Bad

In this section, you will see an example where you can use some of the extrapolations made by a model, but other extrapolations will be too extreme to use. Consider this example: In January 2010, 80 redback spiders were released into a very large forest to fight out-of-control crickets. Several biologists monitored the spider population, and they found that the population had been growing fast ever since the release. The following scatterplot shows the estimated spider population in the forest since January 2010, with an exponential regression function modeling the data.

With more than 30 data points, this model seems fine in terms of sample size. In terms of outliers, no points fell too far from the general trend of the data, but the point at x = 23 was investigated just to be sure. That point was judged by the biologists not to be an outlier and was kept in the data set. Given a strong correlation coefficient of 0.95, the model was seen as strong. However, a few of the biologists suggested that a logistic model should be used instead of an exponential model, since there is a maximum number of spiders the forest's resources can support. The biologists agreed to capture more data over time and reconsider the logistic model if the data began to indicate that the spider population was truly stabilizing.

The biologists decided that the function p(x) = 93.75e^(0.09x) was a good model for the situation. They then wanted to project the spider population some months into the future. The following table includes some of the values they found and their decisions:

Month to Be Projected | Projection | Decision: Use Projection or Not?
May 15, 2010 (x = 5.5) | p(5.5) = 93.75e^(0.09 × 5.5) ≈ 154 | Use the projection, since the SOME aspects of the model check out and this is an interpolation value.
January 2014 (x = 48) | p(48) = 93.75e^(0.09 × 48) ≈ 7,049 | Use the projection, since the SOME aspects of the model check out and this is an extrapolation value within 50% of the range.
December 2015 (x = 60) | p(60) = 93.75e^(0.09 × 60) ≈ 20,757 | Do not use the projection, since this is an extreme extrapolation value. More data is needed to see if a logistic trend appears this far out in the data.

As you can see, while the SOME aspects of the model may check out for some of these projections, it is important to be careful not to extrapolate too far (for example, x = 60).
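Here is a small Python sketch of the same decision rule. The exact data range is not stated in the example, so the sketch assumes observations from x = 0 to x = 32 months, which is consistent with "more than 30 data points" and with x = 48 falling inside the 50% window; treat that range as an assumption:

```python
import math

def p(x):
    """Exponential model from the example: spider population at month x."""
    return 93.75 * math.exp(0.09 * x)

x_min, x_max = 0, 32                    # assumed data range (about 32 monthly points)
limit = x_max + 0.5 * (x_max - x_min)   # 50%-of-range extrapolation limit -> 48

for x in (5.5, 48, 60):
    if x_min <= x <= x_max:
        verdict = "interpolation: use"
    elif x <= limit:
        verdict = "extrapolation within 50% of range: use with care"
    else:
        verdict = "extreme extrapolation: do not use"
    print(f"x = {x}: p(x) = {p(x):,.0f} ({verdict})")
```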

Using Technology to Solve Logistic Equations

In this section, you will use an applet to see an interactive way to solve logistic equations. You should be able to solve logistic equations from the graph alone, so use the applet here just to get a better understanding of the process. Maria manages servers for an online game company, Saga. A new game, Instinct Fighters, became more popular than the company had expected, and its web server has been working at full capacity for the last few hours. The function N(t) = 5 / (1 + 100e^(−0.5t)), shown on this graph, models the number of users on the server (in thousands) since the game went online at midnight (0:00) yesterday. Since the maximum server population is 5,000, Maria needs to keep an eye on when the server population approaches that maximum. For example, when it reaches 4,500 people, Maria has to monitor the server closely and ensure it is not going to crash. With this in mind, when does the model predict the server population will reach 4,500? To answer this question, look for a point on the function's graph where y = 4.5 in the following applet. Move the point along the function until the point's y-value is 4.5 or very close to it. You should locate (13.6, 4.5), which implies that the number of users reached 4,500 around 1:36 p.m. Recall that 0.6 hr × 60 min/hr = 36 min; this is why t = 13.6 corresponds to 1:36 p.m. You can verify that N(13.6) ≈ 4.5 by substituting t = 13.6 into N(t) = 5 / (1 + 100e^(−0.5t)).
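You can also solve N(t) = 4.5 exactly with logarithms instead of the applet. Here is a sketch using only the function given above:

```python
import math

# Solve 5 / (1 + 100 * e^(-0.5 t)) = 4.5 for t by isolating the exponential:
#   1 + 100 e^(-0.5 t) = 5 / 4.5
#   e^(-0.5 t) = (5 / 4.5 - 1) / 100
#   t = ln((5 / 4.5 - 1) / 100) / (-0.5)
t = math.log((5 / 4.5 - 1) / 100) / -0.5
print(round(t, 1))   # 13.6 -> about 1:36 p.m.

# Check by substituting back into N(t):
N = 5 / (1 + 100 * math.exp(-0.5 * t))
print(round(N, 2))   # 4.5
```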

If a function is increasing more and more slowly, would that be concave up or concave down? Examine the following graph. For which sections is it decreasing more and more slowly?

Increasing more and more slowly is a concave-down situation. The graph is decreasing more and more slowly from x = 1 to x = 1.5 and also from x = 3 to x = 3.5. In interval notation, these are the intervals [1, 1.5] and [3, 3.5].

In the Golden Goddess scenario, from the 40th second to the 50th second from when the attack started, was the amount of memory occupied by the virus increasing faster and faster or increasing slower and slower? In the Golden Goddess scenario, from the 40th second to the 50th second from when the attack started, was the virus destroying memory at an increasing or decreasing rate of change?

Increasing slower and slower. Said another way, it was concave down in this section. Decreasing rate of change. Said another way, it was slowing down in this section.

Comparing Instantaneous Rates of Change

Instead of using average rates of change, what if Keith used instantaneous rates of change to analyze how ticket prices shifted at particular instants? One advantage of instantaneous rates of change is that they measure change at a single instant. This means you can use instantaneous rates of change to predict how small changes at one point might affect the variables involved. For instance, the following graph shows the instantaneous rates of change when x = 30 (in solid red) and x = 40 (in blue dashes). From the lines pictured, it is clear that there is a steeper decrease at x = 30 (solid red) than at x = 40 (blue dashes). This means that Keith loses more ticket sales if he increases the price from $30 than if he increases the price from $40. The specific instantaneous rates of change indicate exactly how many ticket sales Keith would be losing in each case. At the $30 price, Keith should expect to lose about 660 ticket sales for each dollar increase, while at the $40 price he should expect to lose about 400 ticket sales for each dollar increase.

It might be tempting to say that Keith should choose the $40 price point since he is losing fewer ticket sales at that price, but remember that an instantaneous rate of change shows how things are changing at that particular moment. Keith is already expecting significantly fewer ticket sales overall at $40 than at $30, so he must keep this in mind as he considers the ticket sales data.

Now consider this situation: Keith has been getting pressure from the fans to lower the price, and the city wants Keith to pack the stadium. To meet these demands, Keith is considering two ticket prices: $30 or $31. Based on the instantaneous rate of change, would more fans be driven away by increasing prices at the $30 price point or the $31 price point? Use the following applet to answer this question. You first need to find the instantaneous rates of change and then interpret them. To find them, move the two sliders to $30 and $31. Once you do, you should see the instantaneous rates of change listed in the table below, along with an interpretation of each. Note: The units on these instantaneous rates of change are "tickets sold (in thousands) per dollar of ticket price." It can be helpful to think of this unit as "tickets sold per ticket-price increase."

Ticket Price | Instantaneous Rate of Change | Interpretation
x = 30 | −0.66 | Ticket sales will go down by about 660 for an increase of $1 at this price point.
x = 31 | −0.58 | Ticket sales will go down by about 580 for an increase of $1 at this price point.

These instantaneous rates of change indicate that Keith actually loses fewer ticket sales for increases at the $31 price point than at the $30 price point. Thus, it seems that increasing the price at $30 would drive away more fans than increasing it at $31.

According to the graph, is the instantaneous rate of change greater at x = 50, x = 100, x = 200, or x = 300? The instantaneous rate of change at x = 300 is the greatest, at 6.09. This means that around the 300th day, the number of users was growing by about six new users per day.
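Instantaneous rates of change like the −0.66 above are slopes of tangent lines. If you had a formula for the sales curve, you could approximate such a slope numerically with a central difference. The demand function below is purely hypothetical, since the lesson's curve is given only as a graph; the estimator itself is the point of the sketch:

```python
import math

def instantaneous_rate(f, x, h=1e-5):
    """Approximate f'(x) with a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Hypothetical ticket-sales curve (thousands of tickets at price x dollars);
# NOT the lesson's actual function, just a stand-in with the same behavior:
# sales fall as price rises, and the drop is steeper at lower prices.
def sales(x):
    return 50 * math.exp(-0.022 * x)

print(round(instantaneous_rate(sales, 30), 2))  # -0.57 -> ~570 fewer tickets per $1
print(round(instantaneous_rate(sales, 40), 2))  # -0.46 -> shallower at the higher price
```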

Measuring ISP Customers Over Time

Internet service has changed and evolved since the internet's inception, so it is only natural to look at how it has changed and to try to measure that change. Think of it this way: just as your speedometer tells you how fast your car is going, a rate of change for an internet service can tell you how fast that service is changing. Consider the following example: Pinnacle is an internet service provider (ISP). As broadband internet became more widespread, Pinnacle noticed its number of dial-up customers dropping off. The number of its dial-up customers, in thousands, can be modeled by the logistic function C(t) = 12 / (1 + 0.23e^(0.3t)), where t is the number of years since 2000. The following is the function's graph.

Pinnacle did not want to support both dial-up and broadband networks; maintaining both would be too expensive in the long term. Therefore, Pinnacle wanted to make sure that the number of customers on its dial-up service was decreasing over the years. In fact, one of the company's goals in 2002 was to decrease the number of dial-up customers to fewer than 2,000 people by 2010. Using the graph above, you know that Pinnacle had about 8,450 dial-up customers in 2002. To reach its goal of fewer than 2,000 dial-up customers by 2010, Pinnacle needed about 6,450 customers to switch over to broadband. This means Pinnacle needed 6450 ÷ 8 = 806.25, or about 806, dial-up customers per year to switch to broadband over the eight years from 2002 to 2010. This number is an average rate of change over the interval from 2002 to 2010. The units of this average rate of change are "customers per year," since the dependent variable is measured in "number of customers" and the independent variable in "years."

How can you see if Pinnacle reached its goal? To determine that, you need to calculate the average rate of change in the number of customers from 2002 to 2010. You use the coordinates associated with these two years, A (2, 8.45) and B (10, 2.14), and perform this calculation: (change in y-value) / (change in x-value) = (2.14 − 8.45) / (10 − 2) ≈ −0.79. Observe the visual representation of this average rate of change in the graph below. Notice that the slope of the line going through points A and B is −0.79. This is no coincidence: the average rate of change based on two points is the same as the slope of the line passing through those two points.

How do you interpret this average rate of change? First, remember the dependent variable is measured in thousands, so the units of this average rate of change are "thousands of customers per year." The value −0.79 indicates that about 790 customers per year canceled their dial-up service or switched to broadband between 2002 and 2010. Did Pinnacle meet its company goal? Not quite. The goal was 806 customers per year, and the reality was 790, so Pinnacle needed about 16 more customers per year to switch to broadband. But give the company credit: the numbers were not far off.

Here is another question: How would the average rate of change vary as you look at wider intervals? For example, how does the average rate of change from 2002 to 2012 look? Or from 2002 to 2014? This is helpful for tracking how the number of dial-up customers changes further into the future. Use the following applet to see how the average rate of change varies as point B is pushed further into the future. What trends do you notice?

Find the average rate of change from (1, 9.16) to (3, 7.66) and interpret this value.
The average rate of change from t = 1 to t = 3 is −0.75. This means that about 750 customers per year dropped the dial-up service between 2001 and 2003. Find the average rate of change from (1, 9.16) to (2, 8.46) and interpret this value. The average rate of change from t = 1 to t = 2 is −0.7. This means that about 700 customers per year dropped the dial-up service between 2001 and 2002.
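A tiny helper function makes these secant-slope calculations routine. This sketch uses the logistic model C(t) given above:

```python
import math

def C(t):
    """Dial-up customers (thousands), t years after 2000."""
    return 12 / (1 + 0.23 * math.exp(0.3 * t))

def average_rate(f, a, b):
    """Slope of the secant line through (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

print(round(average_rate(C, 2, 10), 2))  # -0.79 -> about 790 customers lost per year
print(round(average_rate(C, 1, 3), 2))   # -0.75
print(round(average_rate(C, 1, 2), 2))   # -0.70
```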

When Concavity Changes: Inflection Points

It can be a big deal when a function changes from concave up to concave down or vice versa. The points at which this occurs are called inflection points. In this section, you will learn how to spot these points and what they mean in context. Consider this example: Johan has run into a graph he is not familiar with as he monitors CPU usage on the web servers he manages. Johan is testing how much CPU resource an application uses on a server. The next graph shows a function, C(t), which models the percentage of CPU usage by a software application, where t is the number of seconds since the application started running.

With help from the graphs of y = x² and y = −x², you can identify which parts of the function's graph are concave up and which parts are concave down. The function's graph is concave up from point A to point B because the shape is similar to that of y = x²; you can also think of the graph as opening up between these points. The function is also increasing at an increasing pace, another indicator of concave up. What does this mean? The curve between points A and B shows that the application's CPU usage was increasing faster and faster, meaning the program was using more and more resources. That would be a bad trend to continue.

On the other hand, the function's graph is concave down from point B to point D because the shape is similar to that of y = −x²; you can also think of the graph as opening down between these points. Notice that between point B and point C the function increases more and more slowly, and then it decreases faster and faster between point C and point D. Both of these situations indicate concave down. This meant that CPU use was still increasing up to point C, but more and more slowly, and then CPU use was decreasing quickly. CPU use was heading back down, which is good news for Johan; if CPU use had continued to go up, it could have meant a server crash.

Here is another important thing to notice: The concavity switches direction at points B, D, F, and H. These points are called inflection points. You should be able to estimate the location of inflection points on a graph, and there will be more practice with this skill later. At point B, the concavity changed to concave down. For Johan, this meant that the program was still using more resources but at a slower and slower pace, as seen from point B to point C. After a while, the program actually started using fewer resources, decreasing its CPU usage faster and faster, as seen from point C to point D.
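On sampled data, an inflection point shows up as a sign change in the second differences, the same quantity used in the concavity sketch earlier. Here is a minimal Python illustration; the sine curve is a stand-in chosen because its inflection points are known, not Johan's CPU function:

```python
import math

def inflection_indices(ys):
    """Indices where the second difference of equally spaced samples changes sign."""
    seconds = [ys[i + 1] - 2 * ys[i] + ys[i - 1] for i in range(1, len(ys) - 1)]
    return [i for i in range(1, len(seconds)) if seconds[i - 1] * seconds[i] < 0]

xs = [i * 0.1 for i in range(63)]   # 0.0 to 6.2
ys = [math.sin(x) for x in xs]      # sin flips concavity at x = pi

print([round(xs[i + 1], 1) for i in inflection_indices(ys)])   # [3.2], near pi
```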

Estimating Intervals for Interpolation and Extrapolation

It can be very important to identify when you are interpolating values and when you might be extrapolating values. Examine this example from A&B Car Sales. A&B Car Sales has been doing business since 1993, or t = 0. The original owners, Atmel and Bill, sold the business in 2010. When they sold the business, they made a good case that it was growing, showing prospective buyers a model of growth and future potential. Unfortunately, some of the oldest records had been lost in a fire several years before, so Atmel and Bill had limited data to use for their model. The following graph displays the data for a few years of business; note that data is missing for the business's first five years, 1993 to 1998, and also for years 8, 10, 13, and 16, those being 2001, 2003, 2006, and 2009, respectively. [The graph shows a line that passes through about (4, 20) and (18, 250). Data points closely follow the line in a wave-like pattern. The area around the points is labeled Interpolation and the remaining area is labeled Extrapolation.] In the graph, you can see that the interval for which there is data runs from about t = 5 to t = 15. This means that for any regression model, the interpolation region would be the same, from about t = 5 to t = 15. All points outside of that interval, before t = 5 or after t = 15, are extrapolation values.
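In code, the interpolation-versus-extrapolation distinction is just a range check. A sketch using the t = 5 to t = 15 data interval from this example:

```python
DATA_MIN, DATA_MAX = 5, 15   # years covered by the A&B Car Sales data

def classify(t):
    """Label a model input as interpolation or extrapolation."""
    if DATA_MIN <= t <= DATA_MAX:
        return "interpolation"
    return "extrapolation"

for t in (4, 5, 10, 15, 18):
    print(t, classify(t))   # 4 and 18 are extrapolation; the rest interpolation
```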

A coefficient of determination's value is 0.16. What does that number imply for model fit? A coefficient of determination's value is 0.85. What does that number imply for model fit?

It implies a weak fit, since r² = 0.16 is less than 0.3. It implies a strong fit, since r² = 0.85 is greater than 0.7.

Given the graph of a real-world scenario and two x-values, identify which x-value has the greater rate of change.

It is no surprise that things are always changing in the world; you probably notice this every day. Because so much is changing, it is sometimes helpful to identify where the greater rate of change occurs. For example, suppose you want to increase revenue as quickly as possible. To do that, you need to identify where the greater rates of change in revenue occur so you can act on them quickly. Looking at average and instantaneous rates of change provides insight into such questions. In this lesson, you will learn how to compare average and instantaneous rates of change using the steepness of lines on the graphs of functions.

Identifying Which Model Grows Faster in the Long Term

It is very helpful to compare two situations and identify the more favorable one. Consider this next example with that in mind: Rona is a computer scientist at Google, and she is writing a new algorithm (or program) to process very large digital pictures faster. The old algorithm has a run time that can be modeled with the function O(s) = 5s² + 3s + 1, where s is the size of the picture file (measured in megabytes) and O measures the number of nanoseconds the program takes to run. (Note: 1 second = 1,000,000,000 nanoseconds.) The new algorithm that Rona has written has a run time that can be modeled with the function N(s) = 4s² + 5s + 3. How can Rona find out if her new algorithm is better than the old algorithm? By looking at the graphs of these functions. The following graph depicts the two functions, O(s) in red and N(s) in blue. The associated instantaneous rates of change are shown in matching colors and represented by dotted lines.

Notice how the new N(s) algorithm (in blue) has a better overall run time for larger files: beyond small file sizes, N(s) has lower time values than O(s). This means that Rona's new algorithm is better than the old algorithm where it matters most, on large pictures. The instantaneous rates of change indicate that the two functions start out growing at about the same rate, which is 13 for both functions when s = 1. This means that if the file size of the picture increases by 1 megabyte, each algorithm would need about 13 additional nanoseconds to run. But what if the file size were larger? When s = 5, the situation is very different. The instantaneous rates of change for both algorithms are larger. See if you can find these rates of change in the following applet and interpret them in context.

When s = 5, the picture file is 5 megabytes. In that case, you should have seen that the two rates of change were 53 for the old algorithm and 45 for the new algorithm. These rates of change mean that the old algorithm would take about 53 additional nanoseconds for a file 1 megabyte bigger (a 6-megabyte file), while the new algorithm would take only about 45 additional nanoseconds for the same increase in file size. In fact, if you move the slider bar to various values beyond s = 1, you will see that the instantaneous rate of change for the new algorithm is always lower than that of the old algorithm. This shows that the new algorithm's run time grows more slowly as files get larger and larger.
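Because both run-time models are quadratics, their instantaneous rates of change can be written down directly: the derivative of as² + bs + c is 2as + b. Here is a small Python sketch using the two functions above:

```python
def O_rate(s):
    """Instantaneous rate of change of O(s) = 5s^2 + 3s + 1, i.e., 10s + 3."""
    return 10 * s + 3

def N_rate(s):
    """Instantaneous rate of change of N(s) = 4s^2 + 5s + 3, i.e., 8s + 5."""
    return 8 * s + 5

for s in (1, 5, 10):
    print(s, O_rate(s), N_rate(s))
# 1  13  13  -> same growth rate at 1 MB
# 5  53  45  -> the new algorithm grows more slowly
# 10 103 85
```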

Given two polynomial functions, identify which function will increase or decrease at a faster rate in the long term

Jennice is a programmer who works for different companies on a short-term, contract basis. On some of her jobs, she makes a lot of money per hour and on some, less. All this makes it hard to get a handle on her finances, so she uses a model to plan for the long-term future. Models are sometimes useful when they can predict future values or future situations, especially when it is possible to compare two models to see which model indicates better future growth. Polynomials are not necessarily the best at doing this, but it is important to know the limitations and usefulness of polynomials for this. You will learn why polynomials are limited in this lesson. Other functions will be better suited for this purpose, and you will encounter them later in this course.

Identifying Optimal Linear Regression

Jennifer works for a company that sells various kinds of pet products. She has been asked to look at the rising sales numbers for the past year for one of the company's dog food brands, Canine-ivore, and to predict how the dog food might sell in the coming year. The numbers look reasonable for the most part, but one month, sales were very low compared to the rest. How does that very low sales month affect the prediction Jennifer is preparing for the next year? One data point, if out of line with the rest of the data, can affect the conclusions you draw in a significant way. In this lesson, you will learn how to deal with situations like this.

Identifying Maxima and Minima in Graphs

Johan is a web server administrator for Progress Hospital. The number of hit requests every day follows a pattern and can be modeled by a function. Johan would like to know when the maximum traffic happens so he can stop running other applications on the server around that time. He must also run a 30-minute maintenance application when traffic is at a minimum. To understand these traffic patterns, Johan needs to study the graph of the web server traffic function and locate its maximum and minimum values. Look at the graph modeling Johan's situation. Johan finds that from 8:00 a.m. to 4:00 p.m. every day, the number of the web server's hit requests, in thousands, can be modeled by the polynomial function f(x) = −0.1x⁴ + 1.65x³ − 8.55x² + 14x + 10, where x stands for the number of hours passed since 8:00 a.m. Locate the function's maximum and minimum in the following graph.

Points A and C are maxima of this function, but what do these maxima represent? Look at the response variable. In this case, the "number of hit requests (in thousands)" is the response variable, meaning that points A and C are where the maximum numbers of hit requests occur in Johan's 8 a.m. to 4 p.m. day. When do these maxima occur in Johan's day? When you read a graph, you often need to estimate the value of a certain point. Focus on point A as an example. On the x-axis, the distance between 1 and 2 is cut into 5 grids, making each grid 1 ÷ 5 = 0.2 units wide. Point A's x-value is very close to 1.2, but not quite there, so it is reasonable to estimate point A's x-value as 1.19. How about the y-value of point A? It is somewhere between 16 and 18. On the y-axis, the distance between 16 and 18 is divided into 5 grids, making each grid 2 ÷ 5 = 0.4 units tall. It is reasonable to estimate point A's y-value to be about 17.10. With this information, you can estimate point A's coordinates as (1.19, 17.10). Similarly, you can estimate point B's coordinates to be (4.22, 9.10) and point C's coordinates to be (6.90, 14.90).

When you try to identify a function's maximum or minimum, keep in mind that you are looking at the function's y-values. In the graph, you can see that point A's y-value is larger than point C's y-value; in fact, point A's y-value is larger than that of any other point on the function. You can say the function f(x) has a maximum of 17.10 at x = 1.19. Similarly, point B's y-value is smaller than that of any other point on f(x), so point B represents the minimum: the function f(x) has a minimum of 9.10 at x = 4.22. Here are two key things to keep in mind: When you try to locate a function's maximum, look for the highest point on the function's graph (the highest y-value). When you try to locate a function's minimum, look for the lowest point (the lowest y-value).

How do you interpret the maximum and minimum in Johan's situation? Remember, Johan was interested in finding: The greatest number of hits between 8:00 a.m. and 4:00 p.m., so he can avoid running other tasks during that time. The least number of hits between 8:00 a.m. and 4:00 p.m., so he can run a maintenance application during that time. Interpret the maximum in context first, and then the minimum: Every day, at 1.19 hours from 8:00 a.m. (that is, at about 9:11 a.m.), the web server gets 17,100 hits, the highest value of the day. This means that Johan should stop running other tasks sometime around or before 9:11 a.m. every day. Every day, 4.22 hours from 8:00 a.m. (that is, at about 12:13 p.m.), the web server gets 9,100 hits, the lowest value of the day.
This means that Johan should run the maintenance application sometime around 12:13 p.m. Since the maintenance application runs for 30 minutes, he should ideally schedule it to start about 15 minutes before 12:13 p.m. (around 11:58 a.m.) every day. (A short numerical check of these estimates follows the summary below.)

Johan is replacing Progress Hospital's servers with newer models that are more power-efficient. The cost of electricity for the server room since January can be modeled by the function in the following graph. The function's maximum is 560 at x = 1. This implies the maximum electricity bill so far this year was $560, in January. Identify and interpret the function's minimum. The function's minimum is 360 at x = 6. This implies the minimum electricity bill so far this year was $360, in June.

Lesson Summary

In this lesson, you learned how to identify maxima and minima from a graph and interpret them in context. Here is a list of the key concepts in this lesson:
Maxima and minima occur with respect to the response variable. That is, if the response variable measures income and you find a maximum, you have identified the maximum possible income for the situation.
When you try to locate a function's maximum, look for the highest point on the function's graph (the highest y-value); similarly, when you try to locate a function's minimum, look for the lowest point (the lowest y-value).
Estimate the x-value and y-value of a minimum or maximum based on the labels on the axes.
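As promised, here is a quick numerical check of the graph-read estimates. It scans f(x) on a fine grid over Johan's 8-hour window; small differences from the coordinates estimated by eye are expected:

```python
def f(x):
    """Hit requests (thousands), x hours after 8:00 a.m."""
    return -0.1 * x**4 + 1.65 * x**3 - 8.55 * x**2 + 14 * x + 10

grid = [i / 100 for i in range(0, 801)]   # 0.00 to 8.00 hours after 8:00 a.m.
x_max = max(grid, key=f)
x_min = min(grid, key=f)

print(x_max, round(f(x_max), 2))  # 1.19 17.13 -> point A, about 9:11 a.m.
print(x_min, round(f(x_min), 2))  # 4.24 9.1   -> point B; the graph-read
                                  #   estimate was (4.22, 9.10), very close
```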

Introduction to Concavity

Johan manages web servers at Progress Hospital and is currently testing an application on a server. The following graph shows the application's usage of a web server's central processing unit (CPU), in percentage, from when the application starts up: As the seconds pass, the application uses more and more CPU resource up to 7.5 seconds, then the usage decreases for a few seconds, and finally it stabilizes after 12 seconds. This tells you that the function's values are increasing more and more slowly from 0 to 7.5 seconds and then decreasing faster and faster from 7.5 seconds to 12 seconds. Another way to describe this is to say that the section from t = 0 to t = 12 is concave down, because the concave part faces down. You may also hear some people say that this curve "opens downward," which is another way to visually identify concave down.

Why does concave down matter? For Johan, it is very important. Concave down means the CPU usage is increasing more and more slowly and then decreasing faster and faster. This would mean that CPU resources are being used up more slowly, which is good. On the other hand, concave up would matter to Johan because it would tell him that the CPU usage is increasing faster and faster or decreasing more and more slowly. In short, a concave-up graph could tell Johan that his servers might crash. The concave-down graph in this situation tells Johan that the application may use more resources for a while, but the drain on resources goes down after some time. This is good in terms of CPU usage, since it tells Johan that the application on the server likely will not use all the CPU resources.

There are two types of concavity: concave up and concave down. To help you remember which one is which, examine the following graphs of f(x) = x² and g(x) = −x². The graph of f(x) = x² is concave up, while the graph of g(x) = −x² is concave down.

Is the function in the graph concave up or concave down on the segment from x = 0 to x = 1? The function is concave down from x = 0 to x = 1. This is because, following the curve from 0 to 1, the function is not increasing as rapidly as x increases. Think of this as "the curve is downward-facing." Either way, this is a concave-down curve. Is the function concave up or concave down from x = 1 to x = 2? Use the following graph to answer this question. The function is concave up from x = 1 to x = 2. This is because, following the curve from 1 to 2, the function is going down but then starts to rise. The resulting curve is upward-facing.

Given the graph of an unknown function for a given real-world problem, translate the input and output pairs of the function into real-world meaning.

Johan manages web servers at Progress Hospital, where part of his job is to look at logs and watch for suspicious activities. He has been noticing some peculiar activity over the last few days—activity that does not quite fit into a linear, polynomial, exponential, or logistic pattern. In real life, not all data fits a linear, polynomial, exponential, or logistic pattern. In fact, there are many other types of functions beyond linear, polynomial, exponential, and logistic, and you will still need to be able to analyze any graph to make decisions based on data. The skills in this course will help you when you look at graphs of such unknown functions. This lesson looks at applying the skills you have learned to these mystery functions—or what will be referred to as general graphs.

Graphs of Inverse Functions Continued

Johan manages web servers for Progress Hospital. He is monitoring the number of users on the server and the server's memory usage in gigabytes (GB). The following is the graph of M(u), where M is the amount of memory in GB and u is the number of users in thousands. Next, Johan wants to study how much memory is needed to serve a certain number of web users, so he wants to graph the inverse function of M(u). Which of the following graphs shows M(u)'s inverse function? Since the graphs of M(u) and its inverse function are reflections of each other across the line y = x, you can see that f is M(u)'s inverse function. The following graph clearly shows this relationship. To further verify it, (2, 0.4) is on M(u), and (0.4, 2) is indeed on f.

Making Predictions Continued

Johan works for Progress Hospital's IT department. His team is preparing to upgrade an important application on each desktop computer. The following table shows the project's progress.

Number of Days | Computers Upgraded
3 | 96
5 | 160

Johan's team must complete upgrades on all of the company's 702 desktops in 21 days. Johan wonders whether his team can meet the deadline based on the team's progress in the first 5 days. Johan plotted the data in the following graph. Connecting point A(3, 96) and point B(5, 160), Johan drew a line. The goal, C(21, 702), is located above the line's value at x = 21. This implies that the team will not be able to complete the project by the deadline. On the graph, the line crosses the point (21, 672), so the team can complete upgrades on only 672 computers in 21 days. The difference between the goal and the projected progress is 702 − 672 = 30 computers. The team will miss its goal by 30 ÷ 702 ≈ 0.04 = 4%. In other words, at the current pace, the team can complete about 96% of the upgrades in 21 days. Percentages like these give a rough idea of how much work has been done and how much still needs to be done.
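The projection comes from the slope of the line through points A and B. A short sketch reproducing the arithmetic:

```python
# Project the team's progress from two observed points.
x1, y1 = 3, 96     # point A: 96 computers upgraded by day 3
x2, y2 = 5, 160    # point B: 160 computers upgraded by day 5

slope = (y2 - y1) / (x2 - x1)        # 32 computers per day
projected = y1 + slope * (21 - x1)   # value of the line at day 21

print(projected)                             # 672.0 -> 30 short of the 702 goal
print(round((702 - projected) / 702 * 100))  # 4 (percent short of the goal)
```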

Comparing Instantaneous Rates of Change Continued

Just as you can compare two average rates of change by looking at the steepness of their lines on a graph, you can also compare two instantaneous rates of change. For example, consider a graph of hits on a new website, showing the number of hits per hour. Jason, the site's owner, wants to have instant chat available for customers with questions, and he needs to know whether he will need to add staff at 4 p.m. or at 6 p.m. to handle those chats. The instantaneous rates of change at these two moments, shown on the following graph, can be compared to help him answer that question. To compare the instantaneous rates of change at two points, you can draw tangent lines at those points and compare their slopes. A tangent line is a line touching the function's curve at a single point, replicating, as well as a straight line can, the direction of the curve at that point. In this graph, you can see the tangent lines at points A and B. Now compare the slopes of the tangent lines visually. The slope at 6 p.m. (x = 18) is steeper than the slope at 4 p.m. (x = 16). The instantaneous rate of change at 4 p.m. is 1,050 hits per hour, and the instantaneous rate of change at 6 p.m. is 2,090 hits per hour. Since the rate for 6 p.m. is nearly double that for 4 p.m., Jason knows that he needs to add staff to handle customer questions at 6 p.m. Even without seeing the computations of these two instantaneous rates of change, you could compare the slopes of the tangent lines to see which instantaneous rate of change is faster: whichever has the steeper slope has the faster rate of change. The change is increasing for positive slopes and decreasing for negative slopes.

Using Rates of Change in Decision Making

Knowing how to calculate rates of change in linear functions can help you make good decisions. Consider this example: Ron wants to purchase a newspaper business and has narrowed his choices down to two potential companies. The profit function for Wellington Dispatch, expressed in millions of dollars, is W(t) = 3t − 60, and the profit function for Porter City Morning News is P(t) = 4t − 60, where t is the number of years since 2000. Ron decides to choose the company with the higher rate of change in profit. Which company is that? To find out, he just needs to find the rate of change for both functions. Recall that for a linear function f(x) = mx + b, m is the rate of change, or slope. The rate of change for Wellington Dispatch is 3, and the rate of change for Porter City Morning News is 4. Porter City Morning News is the optimal choice, since it has the larger rate of change and thus a "steeper" line, or faster rate of growth, in profit. The following graph displays a comparison of the profit functions for Wellington Dispatch and Porter City Morning News. In the graph, P(t) grows faster than W(t) because it has a larger slope. Even without the calculation, since P(t) is more slanted than W(t), P(t) has the larger slope. The two functions share the same y-intercept, (0, −60), implying both companies lost $60 million in 2000. With a larger slope, P(t) will become positive (making money) sooner than W(t).

Consider this next example: Two laptop companies are racing to decrease the weight of their laptops. The weight, in pounds, of the lightest laptop released by Proxatech Company and by Alta-Comp Inc. can be modeled by P(t) = 4 − 0.2t and A(t) = 4.8 − 0.4t, respectively, where t is the number of years since 2000. Since the equations are not written in the form f(x) = mx + b, it is a good habit to rewrite them: P(t) = −0.2t + 4 and A(t) = −0.4t + 4.8. Examine their graphs: Without calculating their slopes, you can see A(t) is more slanted than P(t). Although both lines are decreasing, A(t) decreases faster. By their equations, the slope of A(t) is −0.4 lb/year, implying Alta-Comp's lightest laptop weighs 0.4 lb less every year. By comparison, the slope of P(t) is −0.2 lb/year. Although Proxatech's lightest laptop was lighter than Alta-Comp's in 2000, Alta-Comp's technology is improving faster and will catch up with Proxatech in terms of laptop weight in a few years. Note that the slope of A(t) is less than the slope of P(t), since −0.4 < −0.2; A(t) decreases more rapidly because its slope is steeper in the negative direction. Be careful with your wording when slopes are negative. (A short sketch after the summary below checks when Alta-Comp catches up.)

Lesson Summary

In this lesson, you learned how to compare two lines' slopes by graph and by equation. This allows you to compare two linear situations and identify the more favorable situation in a given context. Here is a list of the key concepts from this lesson:
If two lines have positive slopes, the one with the larger slope increases faster.
If two lines have negative slopes, the one with the more negative slope decreases faster.
If two lines' equations are given in the form f(x) = mx + b, you can compare the values of m to decide which line increases or decreases faster or slower.
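As referenced above, this minimal sketch finds when the two laptop weights are equal by setting the linear models equal and solving for t:

```python
# P(t) = -0.2 t + 4 and A(t) = -0.4 t + 4.8 intersect where
# -0.2 t + 4 = -0.4 t + 4.8  ->  0.2 t = 0.8  ->  t = 4.
m_p, b_p = -0.2, 4.0   # Proxatech
m_a, b_a = -0.4, 4.8   # Alta-Comp

t_meet = (b_a - b_p) / (m_p - m_a)
print(round(t_meet, 2))              # 4.0 -> the weights are equal in 2004
print(round(m_p * t_meet + b_p, 2))  # 3.2 -> both laptops weigh 3.2 lb then
```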

Good Fits

Knowing how well a regression model fits the data is important, so there are mathematical ways to evaluate fit that are more reliable than simply judging by the appearance of the regression function on a graph. The measure used to evaluate the fit of a model is called the coefficient of determination and is written r², or spoken of as an r²-value. The coefficient of determination ranges between 0 and 1, inclusive. An r²-value of 1 implies that the data fit the regression function perfectly (that is, all data points are on the curve). An r²-value closer to 1 indicates a strong fit, and an r²-value closer to 0 indicates a weak fit. Another way of thinking about the coefficient of determination is that it gives you an idea of how big a difference to expect between the data points and the values predicted by the model. These guidelines characterize the fit of a model to a set of data:

r²-Value | Characterization
0.7 ≤ r² ≤ 1 | strong model / strong correlation
0.3 ≤ r² < 0.7 | moderate model / moderate correlation
0 < r² < 0.3 | weak model / weak correlation
r² = 0 | no model / no correlation

Always evaluate two other things before evaluating the r²-value of a model: Determine whether the proposed function is the best type of function to fit the data. Identify any possible outliers that affect the fit of the model to the data and address them as fully as possible.

To see how this works in practice, look at a problem Maria is trying to solve. The online game Instinct Fighters was just launched. The following scatterplot provides data on the number of daily online gamers since January 1. Web-server manager Maria wants to analyze the data pattern and make predictions. A linear regression is used to model the data. Maria first notes that a linear function is appropriate here and that there are no outliers. She then interprets the coefficient of determination, which is 0.99908, a value very close to 1, indicating a strong model fit. Not surprisingly, all data points are close to the regression line. Since this linear function also seems to fit the data, it is likely a very good function for modeling this data. An argument for a logistic model could be made, but since the number of online gamers does not seem to be leveling off as the days go on, Maria does not have enough data to support a logistic model. She keeps the linear model.

Though January was a promising month for Instinct Fighters, the numbers took a downward turn in February; more and more users stopped playing the game. The next scatterplot shows data on the number of daily online gamers since February 1. A polynomial regression curve is used to model the data. Maria first notes that a polynomial function is appropriate and that there are no outliers. She decides to proceed with the polynomial regression model and notes that all data points are close to the regression curve, which is why the coefficient of determination is very close to 1. On the graph, you can see that r² = 0.96. This implies that the function is a good model for this set of data.
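The r²-value itself can be computed from the data and the model's predictions as 1 minus the ratio of the residual sum of squares to the total sum of squares. Here is a minimal Python sketch; the observed and predicted numbers are made up for illustration, not Maria's data:

```python
def r_squared(ys, preds):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical daily-gamer counts and a model's predictions for those days.
observed = [10, 14, 19, 26, 30]
predicted = [11, 14, 20, 25, 30]
print(round(r_squared(observed, predicted), 3))   # 0.989 -> a strong fit
```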

LSR and Curves of Best Fit

The least-squares regression algorithm is used to determine functions of best fit. That means that by using the step-by-step process of the least-squares regression algorithm, the applet guarantees that the function f(x) = 2.4 × e^(0.26x) is the exponential function of best fit. Therefore, no other exponential function in existence would fit the data better (or have a better r²-value).

What real-life situations are best modeled by exponential functions? You should choose exponential functions to fit data in situations like these:
Compound interest
Uninhibited growth (for example, the growth of single-cell organisms)
Radioactive decay (for example, carbon-14 or uranium)
Heating or cooling objects (for example, a cooling cup of coffee or hot iron, or the temperature of a cake in an oven as it bakes)

To measure and predict a company's revenue, like Horizon's, there is no scientific directive about using an exponential or a quadratic regression. However, assume that Horizon's revenue grew exponentially over the past two decades. If its revenue growth slows down in the future, it might be proper to use quadratic regression, or even linear regression, to model the data. The point here is that when you are not sure what to do, listen to your data: a scatterplot can often tell you what function might be appropriate just based on its "shape."

Lesson Summary

In this lesson, you learned the basics of data regression with exponentials. Each scatterplot has a curve of best fit, which has the highest coefficient of determination among all possible exponential curves for the data set. With a function to model the data, you can predict values. Here is a list of the key concepts in this lesson:
The least-squares regression (LSR) algorithm is the most commonly used regression algorithm.
The coefficient of determination, r², is a number that measures the strength of the fit between a regression function and the data.
A set of established categories helps characterize the fit of a function to given data, from no correlation through weak, moderate, and finally strong correlation.
Exponential functions model situations with constant ratios of growth very well.

What does a coefficient of determination measure with respect to modeling data? Coefficients of determination measure how well a function fits the data. The temperature of a cup of cooling coffee was measured every minute. To model the data, which type of function should you choose? The temperature of a cooling object fits an exponential function, as explained in this lesson.
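One common way to compute an exponential fit of the form y = a·e^(bx) is to take logarithms, so that ln y = ln a + bx, and run ordinary least squares on the pairs (x, ln y). The sketch below illustrates that idea on made-up points; it is one standard approach, not necessarily the applet's exact algorithm:

```python
import math

def fit_exponential(xs, ys):
    """Fit y = a * e^(b x) by least squares on (x, ln y)."""
    logs = [math.log(y) for y in ys]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_l = sum(logs) / n
    b_num = sum((x - mean_x) * (l - mean_l) for x, l in zip(xs, logs))
    b_den = sum((x - mean_x) ** 2 for x in xs)
    b = b_num / b_den
    a = math.exp(mean_l - b * mean_x)
    return a, b

# Made-up data roughly following y = 2.4 * e^(0.26 x):
xs = [0, 2, 4, 6, 8]
ys = [2.5, 4.0, 6.9, 11.0, 19.5]
a, b = fit_exponential(xs, ys)
print(round(a, 2), round(b, 2))   # close to 2.4 and 0.26
```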

Input Values and Decisions

Leisha is considering setting up a business and she has to decide between two different plans of action: In Plan A, Leisha invests heavily in equipment but requires less labor, while in Plan B, Leisha starts up with less equipment but must employ more labor. By comparing the costs over time under each model, Leisha can make an informed decision about which strategy to follow.

Consider the annual rainfall in inches over the past six years for an island in the Atlantic Ocean: 0.5, 0.25, 0.46, 0.33, 0.42, and 0.3. What might be a reasonable prediction of rainfall for the next few years? The following graph displays data on the number of users, in thousands, for a video app since its launch in 2010. Which year represents the highest value that should be used to extrapolate from this data?

Less than 1 inch. It is reasonable to believe from the data that any value within the range of 0 inches to 1 inch would be a reasonable prediction. For the video app, the data range is 6 − 1 = 5 years, so 50% of the range is 5 × 0.50 = 2.5 years. This sets the extreme extrapolation mark at 6 + 2.5 = 8.5 years.

Given an exponential function and an input, calculate the corresponding output.

Lesson Introduction A very smart person once said, "Compound interest is the eighth wonder of the world. He who understands it, earns it. He who does not, pays it." This sage was talking about an application of exponential functions in the financial industry. In this lesson, you will learn what an exponential function is and how to calculate an output, given an input.

Given a scatterplot of real-world data, an exponential regression function for the data, and the associated coefficient of determination, interpret the regression function and the associated coefficient of determination in context.

Lesson Introduction

Businesses are always forecasting the future. Managers make business decisions in the present based on what they think the future will be like. In today's world, they rely on "big data," meaning very large sets of raw data with the potential to be mined for insights on human behavior and trends. Up to this point, you have worked with exponential functions that have already been fitted to data. Now you will work with the process of finding best-fit functions based on data. This is called data regression. These skills are important because the need to analyze and interpret big data is becoming more and more common in both professional settings and daily life. Not only will you need to dig into high-quality analyses, but you will also need to be able to spot a bad job of fitting functions to data and ask the right questions when presented with such models. Keep in mind that software is usually used to find best-fit functions; in this course, the focus is on interpreting and assessing them.

Best-Fit Curves and Future Revenues

Businesses always want to forecast future revenues so they can plan things like expansion and raises. To forecast revenues, businesses often use best-fit curves. Consider this example: Revenue is one key thing business managers try to forecast. Consider Horizon's revenues since 2000, displayed in the following table:

Years Since 2000 | Revenues in Billions of Dollars
0 | 2.8
1 | 3.1
2 | 3.9
3 | 5.3
4 | 6.9
5 | 8.5
6 | 10.7
7 | 14.8
8 | 19.2
9 | 24.5
10 | 34.2
11 | 48.1
12 | 61.1
13 | 74.5

How can you use the data in this table to predict the future? For example, what will Horizon's revenues likely be in the next few years? In real life, you rarely have a function handed to you, so you need to use the skill of data regression, modeling the data with a curve of best fit. Next is a graph showing all the ordered pairs in the table above. It is called a scatterplot, since the data points are initially "scattered" on the graph. To predict Horizon's revenues in the next few years, model the data with a function. Using the historical data, you can see that Horizon's revenue increased in a very patterned way over the years. This particular pattern is an exponential pattern, where the y-values grow at a constant ratio. Since this scatterplot visually matches what an exponential function does, you should use an exponential function to model it. Still, there are many choices for the function's exponential equation; you need to find a curve of best fit. The next graph shows three curves that try to fit the data. For curve 1, almost all data points are below it, implying the curve almost always overestimates the y-values. For curve 3, almost all data points are above it, implying the curve almost always underestimates the y-values. Compared to curve 1 and curve 3, curve 2 fits the data best, because it is the closest to the data points in the scatterplot. Keep in mind that curve 2 would still underestimate the y-values of 2011 and 2012 and overestimate those of 2005, 2006, and 2007. Still, these errors average out much better than the errors in curve 1 and curve 3.

How Are Best-Fit Curves Found?

How was the equation of the function for curve 2 found? There are a few different ways this can be done, but in this course, you will focus on the least-squares regression (LSR) algorithm to find functions of best fit.
Exactly how this process works is beyond the scope of this course, which focuses on concepts and interpretation; just know that technology can be used to find the curve of best fit. Before going on, you should know about the number e. The number e is a special constant that appears frequently with exponential functions, so you will see it a lot in this unit. If that seems strange, think of it as very similar to the number π. For circles, π is a special number that helps us work with areas and circumferences; for exponentials, e plays a similar role and makes exponential functions easier to work with. Do not treat e as a variable; it is a number, a constant, just like π.

The next graph shows the curve of best fit for the data of Horizon's revenues and a measure of how well the curve fits the data. The function of the curve of best fit is f(x) = 2.4 × e^(0.26x), where e is a constant with a value of approximately 2.718. However, taking a curve of best fit on faith alone can be dangerous in terms of data interpretation. How can you tell how good a curve of best fit is? The coefficient of determination, r², on the graph gives an indication of how well the curve fits the data. A coefficient of determination is a measure of how well a function fits the data. It can be as small as 0 or as large as 1; said another way, 0 ≤ r² ≤ 1. Values closer to 1 indicate a strong fit, while values close to 0 indicate a weak fit. It is rare to see r² = 1, as this would mean the function fits the data perfectly, and perfection is rare. An exponential curve of best fit is the exponential function that has the highest coefficient of determination of all possible exponential functions. In this example, r² = 0.8281. The coefficient of determination gives you an idea of how big a difference you can expect between the real-world values and the values predicted by the model. Using the following table, you can see that this function is a strong model, meaning that the correlation between the function and the data points is strong.

r² Value | Characterization
0.7 ≤ r² ≤ 1 | strong model / strong correlation
0.3 ≤ r² < 0.7 | moderate model / moderate correlation
0 < r² < 0.3 | weak model / weak correlation
r² = 0 | no model / no correlation

Since the Horizon function is a strong model, you can use it to make some predictions. Remember that the data extended only to 2013; there was no data for Horizon's revenues in 2014. However, since this is a strong model, try using the following applet to predict Horizon's revenues in 2014. Drag point A so that the x-value is close to x = 14. You should then see the point (14, 97.03) or one close to it. This point implies that Horizon's revenue would be about $97.03 billion in 2014. If you want to know when Horizon's revenues break $100 billion, find a coordinate on the graph where the y-value is 100. Sliding point A around again, you should be able to find the point (14.11, 100). This coordinate implies that Horizon's revenue likely reached $100 billion just into 2014, given the small decimal value. Based on the data and the graph above, predict Horizon's revenue in 2017. In function notation, f(17.03) = 215.94, which implies that Horizon's revenue was approximately $216 billion in 2017. Based on the data and the graph above, when did Horizon's revenue reach $150 billion? In function notation, f(15.9) = 150, which implies that Horizon's revenue reached $150 billion in 2016. Move the point until the function's y-value is 150 and its x-value is 15.9.
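Instead of sliding the applet's point, the $150 billion question can be answered with logarithms, since f(x) = 150 can be solved for x directly. A sketch:

```python
import math

# Solve 2.4 * e^(0.26 x) = 150 for x:
#   e^(0.26 x) = 150 / 2.4
#   x = ln(150 / 2.4) / 0.26
x = math.log(150 / 2.4) / 0.26
print(round(x, 1))   # 15.9 -> revenues reach $150 billion during 2016
```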

Given a real-world scenario modeled by an exponential function, translate a given rate of change of the exponential function into real-world meaning.

Lesson Introduction

Change is a constant in business. Roberto's company, Mappit, which produces components for GPS automotive guidance systems, is currently seeing its value grow by five million dollars a month. Over the last six months, production of the GPS screen has increased by five units per day, and the number of Mappit component orders has been rising by 19% weekly. Each of these rates of change represents a different type of change for the company. These changes are either instantaneous or average rates of change. Instantaneous and average rates of change mean different things and have different uses. In this lesson, you will glean real-world meaning from both average and instantaneous rates of change.

Before looking at rates of change for exponential functions, take a second to refresh yourself on rates of change for linear and polynomial functions: Linear functions increase at the same rate forever. Because of this, linear functions always have the same average and instantaneous rate of change everywhere. The rate of change of a linear function is called the slope; if you have a linear function in the form f(x) = mx + b, then the slope, or rate of change, is the value m. Polynomial functions typically have many "turns": an nth-degree polynomial can have as many as n − 1 turns. All of these turns mean that polynomials commonly increase for a while, then decrease, then increase again, and so on. Since so much is changing with polynomial functions, rates of change give a way of measuring exactly how things are changing. Changes over a span of values, like time, are well suited to an average rate of change, which gives an idea of how things change over a period of time. On the other hand, changes at a particular instant are well suited to an instantaneous rate of change. The method for calculating average rates of change, and for interpreting them, does not really change for exponential functions. That is one reason rate of change is such a useful concept: it can be used with any type of function and gives valuable information about how things are changing. Also, you still will not need to know how to calculate instantaneous rates of change, but you do need to know how to interpret them.

The GPS company, Mappit, just launched a new website and saw 10,200 visitors to the site on launch day. Each day after launch day, the number of visitors increased by 10%. That means that each day, the number of visitors grew by a factor of 1.1. This factor, 1.1, comes from the fact that the number of visitors each day is 110% of the previous day's. The number of site visitors, y, after x days can be modeled by the equation y = 10,200 × 1.1^x. The number of visitors on any given day can then be found by substituting the day number for x. This table shows the number of visitors for the first four days after the website launched, found using the equation y = 10,200 × 1.1^x:

Day | Number of Visitors
0 | 10,200
1 | 11,220
2 | 12,342
3 | 13,576.2
4 | 14,933.82

Before going any further, note that this situation is not linear; the data does not change in a straight line, as a quick look at the table shows. When an increase is expressed as a percentage, the situation automatically becomes exponential, because a percentage increase is a constant ratio. With the number of visitors increasing so quickly, Mappit needs to know that its web servers can handle the traffic.
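A few lines of Python reproduce the table and the average rates of change discussed next. This sketch is built directly from the model y = 10,200 × 1.1^x:

```python
def visitors(x):
    """Modeled site visitors x days after launch."""
    return 10_200 * 1.1 ** x

for day in range(5):
    print(day, round(visitors(day), 2))   # 10200, 11220, 12342, 13576.2, 14933.82

# Average rate of change from day 1 to day 3 (slope of the secant line):
print(round((visitors(3) - visitors(1)) / (3 - 1), 1))   # 1178.1 visitors per day
# ... and from day 2 to day 4:
print(round((visitors(4) - visitors(2)) / (4 - 2), 2))   # 1295.91 visitors per day
```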
The average rate of change relates the amount of change over a period of days, divided by the number of days. For example, from day 1 to day 3, the number of visitors to the site increased from 11,220 to 13,576.2. The average rate of change is calculated by dividing the difference in the number of visitors, which is 2,356.2, by the difference in the number of days, which is 2. The average rate of change from day 1 to day 3 is approximately 1,178 more visitors per day. You can also think of this as the slope formula:

m = (y₂ − y₁)/(x₂ − x₁) = (13,576.2 − 11,220)/(3 − 1) = 2,356.2/2 = 1,178.1

Similarly, you can calculate the average rate of change from the second to the fourth day (from x = 2 to x = 4). The average rate of change from x = 2 to x = 4 is approximately 1,296, as you can see:

m = (y₂ − y₁)/(x₂ − x₁) = (14,933.82 − 12,342)/(4 − 2) = 2,591.82/2 = 1,295.91

This means that from the second to the fourth day, the number of visitors to Mappit's site grew, on average, by 1,296 per day. Note that the actual number of visitors did not grow by 1,296 each day; they grew by a little less than this from day 2 to day 3 and a little more from day 3 to day 4. That is why it is said that the number of visitors grew on average. Using the average rate of change lets you work with nonlinear growth as if the amount were changing by an equal amount for each increment of time.

This idea of equal daily growth is useful in many ways. One example is when building the website's infrastructure. Why is this helpful? Think of the website servers handling all this traffic. If Mappit had not invested in a good server for the new website launch, it may need to invest more in the website's infrastructure, since the number of visitors is increasing daily. If Mappit knows the rough capacity of its server, then the average rate of change gives the company a way to forecast how long until the web traffic outgrows that capacity. Keep in mind that average rates of change become more helpful the longer you look at the data. Looking from day 2 to day 4 is a very limited window for an average rate of change. Having data from day 2 to day 20 would be much more helpful, since it would average more changes over many more days.

You have now seen an average rate of change. What about an instantaneous rate of change? The instantaneous rate of change tells you the increase in visitors per day at a given moment, as opposed to over a period of time. In the context of Mappit and its website, a moment means a certain time, like midnight on the third day. Knowing instantaneous growth is helpful in looking at a given day's change. An instantaneous rate of change does not rely on what is happening on any other day; it only relies on a particular instant. For example, when x = 2 (at the start of day 2), the instantaneous rate of change is approximately 1,176. This means that, exactly as the second day starts, the number of site visitors is growing by 1,176 people per day. When x = 4, the instantaneous rate of change is approximately 1,423, which means that exactly as the fourth day starts, the number of site visitors is growing by 1,423 people per day. These changes are instantaneous: the change described is not over a period of time but at an exact moment. This tells you several things. First, the number of visitors is growing. Second, it gives an immediate number to describe growth. Daily change is not affected by the previous day's change or the next day's.
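Both kinds of rate can be sketched in a few lines of Python, with the instantaneous rate approximated by a difference quotient over a tiny interval (the helper names are illustrative, and the approximation stands in for calculus, which this course does not require):

```python
def visitors(x):
    return 10_200 * 1.1 ** x

def average_rate(f, x1, x2):
    """Slope formula: change in output divided by change in input."""
    return (f(x2) - f(x1)) / (x2 - x1)

def instantaneous_rate(f, x, h=1e-6):
    """Approximate the rate at a single instant using a tiny interval."""
    return (f(x + h) - f(x)) / h

print(average_rate(visitors, 1, 3))       # ~1178.1 visitors per day
print(average_rate(visitors, 2, 4))       # ~1295.91 visitors per day
print(instantaneous_rate(visitors, 2))    # ~1176, at the start of day 2
print(instantaneous_rate(visitors, 4))    # ~1423, at the start of day 4
```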
You also might have noticed that in just two days, the instantaneous rate of change grew from about 1,176 to about 1,423 visitors per day, so the rate of change itself is larger by day 4. You may recognize this as a concave-up situation, but it is okay if you did not. Concavity of exponential functions is not a focus because exponentials are always either concave up or concave down. Since the concavity of an exponential never changes, it is not really useful to talk about its concavity. You should keep in mind that exponential functions always fall into one of the following situations:

- Growing faster and faster indefinitely (concave up)
- Growing slower and slower indefinitely (concave down)
- Declining, or decaying, faster and faster indefinitely (concave down)
- Declining, or decaying, slower and slower indefinitely (concave up)

How does the average rate of change make an exponential function act linearly? An exponential function changes by a different amount over each interval. Using the average rate of change, those differing amounts are averaged into a single, constant rate of change.

Which choice is an average rate of change? 12 shipments per day from day 3 to day 9. The total number of shipments is changing by an average of 12 shipments per day over the 6-day period.

Lesson Summary

In this lesson, you used the fictional company Mappit to dig deeper into exponential functions and their average and instantaneous rates of change and to explore exactly what the calculations mean in real-world terms. Here is a list of the key concepts in this lesson: Like polynomial functions, exponential functions can be examined for both average and instantaneous rates of change. Average rate of change happens over a period of time and describes change in equal-sized pieces. You can find the average rate of change between two points by using the slope formula. Instantaneous rate of change is the rate of change for a given instant. This rate of change is different for every instant in an exponential function.

The average rate of change for a given stock from x = 3 to x = 7 is −$0.595. What does this mean? It means that from hour 3 until hour 7, the value of the stock dropped by an average of about $0.60 per hour. The average rate of change is the average amount of change for each hour.

Which choice is an instantaneous rate of change that indicates loss? In a jump rope competition, 9 fewer jumps were made per minute after 30 minutes. This hits both targets: an instantaneous change and a loss.

Given a data set, a proposed function to model the data set, a conclusion, and the supporting calculation for the conclusion, interpret the calculation used to make the conclusion.

Lesson Introduction Clean Pro Janitor Services has 30 full-time janitors on its payroll. Some of these janitors are in their 20s and 30s; some long-term employees are in their late 50s and 60s and are approaching retirement. Clean Pro wondered how much funding would become available for salaries when some of the older workers retired, so the company did some research to see how age and salary were correlated for the janitors, if at all. In the last lesson, you learned that a conclusion based on a regression can only be trusted when it is based on an adequate sample size, when possible outliers have been explained or removed from the data set, when the model is strong and appropriate, and when any extrapolations are within reason. In this lesson, you will continue to learn about drawing valid conclusions for situations based on a model, including more on dealing with outliers, interpolations, and extrapolations.

Given two scatterplots of real-world data (one with outliers, one without outliers), the two associated exponential regression functions, and the associated coefficients of determination, identify the more appropriate regression function for the data.

Lesson Introduction

Gloria works for the United Nations. Her team was asked to predict the world's population in the near future. You can imagine how many important policy and business decisions will be made based on these predictions. Gloria's team used exponential regression as the very first step in their prediction. The team debated whether to include the data point that the world population reached one billion in 1804, but Gloria argued that it should not be considered because it is an outlier. What is an outlier? In this lesson, you will learn the answer to that question, and you will also learn what to do when you spot an outlier to ensure that the conclusions you draw from the data are valid.

Given a real-world scenario and a corresponding polynomial function or its graph, interpret either the average or the instantaneous rate of change in context.

Lesson Introduction

In 1964, a Ford Mustang sold for about $2,400. Cars usually lose value quickly after they are driven off the lot, but a 1964 Mustang in pristine condition can be worth a lot of money to collectors today. The value of one of these Mustangs can be modeled by the polynomial function V(t) = 18t^2 − 390t + 2,400, where t is the number of years since 1964. In this lesson, you will first work on calculating and then on interpreting average rates of change. As you will see, the average rate of change, calculated with the slope formula, is a ratio between the change in the dependent (usually y) variable and the change in the independent (usually x) variable. This can help you see how the value of a 1964 Mustang changes over a period of time. After that, you will work on interpreting instantaneous rates of change in context. Instantaneous rates of change let you see how things are changing at a particular instant. How fast is the value of the 1964 Mustang appreciating at this exact moment? That question asks for an instantaneous rate of change. You will learn what this term means, why it is useful, and how to find it at a "single point."

A Polynomial Function and Sales

Imagine your company tracked its sales per day (S) from day one. Your company could then use this data as a benchmark for opening a new branch or location. Suppose the sales per day (S) is modeled by the equation S(d) = 0.002d^2 − 0.5d + 150, where d measures the days since your company first opened. Assuming this function will predict the sales at the new location, you can use it to predict the average rate of change in sales per day (S) for the first 30 days in business, and likewise for the first year in business. You just calculate the slope over each of these intervals, [0, 30] and [0, 365], respectively. Here are the two calculations:

Interval | Coordinates | Average Rate of Change
[0, 30] | (0, 150) since S(0) = 150, and (30, 136.8) since S(30) = 136.8 | m = change in y / change in x = (136.8 − 150)/(30 − 0) = −13.2/30 = −0.44
[0, 365] | (0, 150) since S(0) = 150, and (365, 233.95) since S(365) = 233.95 | m = change in y / change in x = (233.95 − 150)/(365 − 0) = 83.95/365 = 0.23

Notice that the average rates of change are different for the two intervals. That is because this is a nonlinear function, so the average rate of change is likely to vary over different intervals. If two average rates of change are ever the same for a nonlinear function, it is likely just by chance.

What do these average rates of change tell you? The first one shows that, on average, sales per day (S) went down by 44 cents each day for the first 30 days. That is not a huge hit, but it is alarming that sales per day would go down in the first 30 days. At the same time, this sometimes happens in businesses because the height of sales occurs on opening day. The second average rate of change is more promising. It shows that, on average, sales per day (S) went up 23 cents per day for the first year the store was open. Over the course of a year, this adds up: 23 cents a day for 365 days means sales per day grew by 0.23 × 365 = $83.95 over the first year. That is, on the second day the store was open, sales were about 23 cents higher than on the first day, but by the time 365 days had gone by, sales per day had grown to $83.95 more than on the first day.
This is shown in the slope formula; sales went up $83.95, as indicated by the numerator. As you can see, knowing average rates of change is helpful even in business contexts. They can help you plan for how a business grows, which can then help you forecast how to manage it.

Stella sells drinks at her sandwich shop. Stella estimates her revenue from drinks alone, R, to depend on the price she sets for the drinks, x. Stella had a regression calculated based on some data and has the following regression function: R(x) = 12x − x^2. Find the average rate of change between the points (1, 11) and (4, 32). Correct! The average rate of change between the points is 7, since m = change in y / change in x = (11 − 32)/(1 − 4) = (−21)/(−3) = 7.

The height, in feet, of an arrow shot straight upward from the ground is given by h(t) = −16t^2 + 48t. After 1 second (t = 1), the height is 32 feet. At 3 seconds (t = 3), the height is 0 feet. What is the average rate of change over the interval [1, 3]? The coordinates needed are (1, 32) and (3, 0). Plugging these coordinates into the slope formula gives −16.
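As a quick numeric check on the sales example, here is a small sketch applying the slope formula to S(d) = 0.002d² − 0.5d + 150 (function names are illustrative):

```python
def sales(d):
    """Sales per day, d days after the company opened."""
    return 0.002 * d ** 2 - 0.5 * d + 150

def average_rate(f, x1, x2):
    """Slope formula over the interval [x1, x2]."""
    return (f(x2) - f(x1)) / (x2 - x1)

print(average_rate(sales, 0, 30))    # ~-0.44: down 44 cents per day
print(average_rate(sales, 0, 365))   # ~0.23: up 23 cents per day
```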

Given the graph of an exponential function for a real-world problem, translate the input and output pairs of the exponential function into real-world meaning.

Lesson Introduction

In newspaper articles, you often read that a business is experiencing "exponential growth." However, exponential functions can model decrease as well as increase. When Microsoft was experiencing exponential growth from 1980 to 2000, Apple's business was decreasing exponentially; decrease is sometimes also called "decay." In this lesson, you will examine two types of exponential models, growth and decay, while getting more practice putting input-output pairs in context.

Exponential Growth and Decay

You may remember businesses that used to rent movies. They are not around anymore because the market shifted to streaming. Movie-rental businesses are a great example of exponential decay, while the rise of movie-streaming businesses is a great example of exponential growth. With that in mind, consider this example: In 2000, Best Movie Rental had 500,000 memberships, while Play It Again Films had only 3,000 memberships. However, more and more people switched from Best Movie Rental to Play It Again Films. The number of customers at those two companies can be modeled by the following exponential functions: B(x) = 500,000 × 0.8^x and P(x) = 3,000 × 1.2^x, where B(x) and P(x) represent the number of memberships at Best Movie Rental and Play It Again Films, respectively, and x stands for the number of years since 2000. The following graph compares those two functions.

Best Movie Rental's graph starts at (0, 500000), implying that the company had 500,000 memberships in the year 2000. The company has been losing memberships ever since. In the function B(x) = 500,000 × 0.8^x, the common ratio, 0.8, is smaller than 1. When a number is multiplied by 0.8, the product becomes smaller, so it makes sense that B(x) is decreasing. Play It Again Films' graph starts at (0, 3000), implying that the company had 3,000 memberships in the year 2000. The company has been gaining customers ever since. In the function P(x) = 3,000 × 1.2^x, the common ratio, 1.2, is greater than 1. That is why the function is increasing.

In general, for an exponential function f(x) = Ca^x: The function increases, or grows, if a > 1; these models are called exponential growth. The function decreases if 0 < a < 1; these models are called exponential decay. Functions with a < 0 deal with complex numbers and are beyond the scope of this course.

One more thing before you try some questions on your own. In the graph comparing Best Movie Rental's and Play It Again Films' memberships, there was a point C (12.62, 29936.32). This was the point where the two functions crossed, or intersected. What does this point C mean in context? The y-values are the same for both functions at this point (y = 29936.32), meaning that both companies had about 29,936 memberships at that time. What time was that? Well, x = 12.62 corresponds to years since 2000, so it occurred a little more than halfway into the year 2012. To zero in on the month, calculate 12 × 0.62 = 7.44 (12 for the number of months in a year) to get more insight on which month it was. The value 7.44 indicates it was the seventh month (July), and it was just under halfway through July that this happened. If you want to dive even further into this value and estimate a day, use the fact that July has 31 days to calculate 31 × 0.44 = 13.64. This indicates that the two companies each had about 29,936 members around July 13, 2012.
Keep in mind that this is also the day when Play It Again Films started beating its competitor, Best Movie Rental.

Which of these statements is true about the function f(x) = 2 × 0.5^x? The function models exponential decay, and its initial value is 2. For an exponential function f(x) = C × a^x, C is the initial value and a is the common ratio. When 0 < a < 1, the function models exponential decay.

Which of these statements is true about the function f(x) = 0.5 × 2^x? The function models exponential growth, and its initial value is 0.5. For an exponential function f(x) = C × a^x, C is the initial value and a is the common ratio. When a > 1, the function models exponential growth.
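If you would like to verify the crossover numerically rather than read it from the graph, a brute-force scan works. This is a minimal sketch assuming the two membership models above (the step size and names are just for illustration):

```python
def best_movie(x):       # decay: common ratio 0.8 < 1
    return 500_000 * 0.8 ** x

def play_it_again(x):    # growth: common ratio 1.2 > 1
    return 3_000 * 1.2 ** x

# Step forward until Play It Again Films catches up.
x = 0.0
while best_movie(x) > play_it_again(x):
    x += 0.01
print(round(x, 2))                  # ~12.62 years after 2000
print(round(play_it_again(x)))      # ~29,950, close to point C's 29,936
```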

Given a real-world scenario modeled by a polynomial function, interpret what concave up or down means in context.

Lesson Introduction

Linear relationships are very common in real life. For example, Ira makes $10 an hour and works x hours per week, so his paycheck is f(x) = 10x dollars. The graph of the function is a straight line, going up an equal amount for each hour Ira works. However, many functions in real life are not linear, so their graphs are not straight lines. In this lesson, you will see why a function's graph might not be a straight line and whether it "curves up," which is called concave up, or "curves down," which is called concave down. You will also learn what that concavity means.

Given the graph of an exponential function, translate solutions to exponential equations into real-world meaning.

Lesson Introduction

Martin works for an insurance business handling Medicare claims. He is analyzing Medicare spending in the United States, and he wants to predict Medicare spending based on data collected since 1975. In this lesson, you will help Martin with this task, which will involve finding solutions to exponential equations and translating them into real-world meaning.

Growing Medicare Costs

Healthcare costs have been on the rise for decades now. In fact, healthcare spending has been increasing faster and faster, matching an exponential pattern. Consider this example: As more Americans retire and healthcare costs continue to grow, it is not hard to believe that Medicare spending has been growing exponentially over past decades. Medicare spending, in billions of dollars, can be modeled by the function M(x) = 28.22 × 1.08^x, where x is the number of years since 1975. This graph shows an exponential function:

Martin wants to know when Medicare spending will reach one trillion (1,000 billion) dollars if this trend continues. One way to do this would be to substitute y = 1000 into M(x) and solve for x in this equation: 1000 = 28.22 × 1.08^x. However, doing this by hand would take some serious computational skills, and you do not focus on those skills in this course. Instead, you will estimate the solution based on the graph. On the graph, when the function's value is 1,000, its x-value is between 45 and 50. On the x-axis, the distance between 45 and 50 is divided into 5 grids, making each grid 5 ÷ 5 = 1 unit. It is reasonable to estimate M(46.3) ≈ 1000, implying Medicare spending will reach 1 trillion dollars in early 2021, which is 46.3 years after 1975. You can check this result by substituting x = 46.3 into M(x):

M(46.3) = 28.22 × 1.08^46.3 = 28.22 × 35.2792… ≈ 995.579

The result is close to 1,000, but it is not exact. You would expect some degree of error, because you are making estimations based on a graph. The following applet matches particular output values (y-values) to particular input values (x-values). From the applet, you should be able to see that when the output is y = 1000, the associated input value is x = 46.36. This means the original guess of x = 46.3 was very close.

Medicare spending, in billions of dollars, can be modeled by the function M(x) = 28.22 × 1.08^x, where x is the number of years since 1975. To find when Medicare spending will reach $2 trillion, which equation should you solve? 2000 = 28.22 × 1.08^x. You need to replace the function's y-value with 2000 and solve for x.

Number of Websites Worldwide

Using data from previous years, the function below was created; it models the number of websites worldwide, in millions, since the year 2000: w(x) = 21.24 × 1.24^x, where x is the number of years since 2000. As you drag point A on the graph of w(x) in the following applet, notice the changing coordinates. If you drag point A to the far left, its coordinates (0, 21.24) imply that there were 21,240,000 websites worldwide in 2000. This also means that w(0) = 21.24. In general, if you plug in a year (the independent variable), you can simplify the function and see what the corresponding number of websites was (the dependent variable). However, what if you needed to do the opposite? What if you knew the dependent variable but needed to know the independent variable?
For example, the Federal Communications Commission (FCC) is a branch of the U.S. government that deals with digital communications, among other things. The FCC needs to be able to figure out how quickly internet traffic is growing as well as approximately how many websites exist. The FCC may need to know when there were 500 million (half a billion) websites. To find this answer, substitute 500 for the function's y-value and solve for x, resulting in the following equation: 500 = 21.24 × 1.24^x. Instead of using computation to solve this equation, use the applet. Drag point A to a place where its y-value is very close to 500. You should get coordinates indicating that the number of websites worldwide reached half a billion in the second half of 2013.

Use the applet to find the number of websites worldwide in the year 2010. In function notation, you have w(10) = 214.22, implying that the number of worldwide websites reached 214,220,000 in 2010. On the function's graph, when x = 10, the corresponding y-value is 214.22.

Lesson Summary

In this lesson, you learned how to solve for x in the equation f(x) = y by graph and to put that solution in context. This is equivalent to locating the point on the function's graph whose y-coordinate is the given output. Here is a list of the key concepts in this lesson: For an exponential function, you can estimate the associated input for a given output. You can solve exponential equations by using a graph. Most exponential equations have only one solution. Interpreting the solution to an exponential equation means remembering which function you found a solution to and then interpreting the corresponding values in terms of the independent and dependent variables. Account for the real-world context behind the variables in your interpretation of the solution.
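For readers comfortable with logarithms, the graphical estimate can be confirmed directly. This short sketch, assuming the Medicare model M(x) = 28.22 × 1.08^x from above, solves for x exactly:

```python
import math

# Solve 1000 = 28.22 * 1.08**x  =>  x = log(1000 / 28.22) / log(1.08)
x = math.log(1000 / 28.22) / math.log(1.08)
print(round(x, 2))                      # ~46.36, matching the applet
print(round(28.22 * 1.08 ** x, 2))      # ~1000.0, checking the answer
```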

Given a real-world scenario and a corresponding exponential function or its graph, interpret the average rate of change at two specified values in context.

Lesson Introduction

Mike looks up at Long's Peak from the trailhead, noticing that parts of the footpath look very steep while other parts are relatively flat. The path does not rise at the same rate for the whole climb: sometimes it is steep, and other times less steep. For any given spot, it is hard to describe exactly how steep the hill is. But if Mike picks a particular section of the climb, he can describe the steepness. In other words, he can describe the average rate of change for that section. This is the magic of the average rate of change: it can take an irregular change and make it easier to explain or describe. When doing this, context is always important, so you have to know the units being measured. In Mike's case, he would probably use the change in feet of altitude per mile. He could then say that from the trailhead to the 2-mile marker, the trail rose 13 feet per mile. In this lesson, you will first learn how to calculate an average rate of change using the slope formula. You will then learn how to interpret those average rates of change in the context of a real-world situation.

Given two polynomial equations modeling a real-world situation, identify the equation or model that represents an ideal situation based on the real-world situation.

Lesson Introduction Short-term interest rates are usually lower than long-term interest rates. For example, if you buy a six-month certificate of deposit (CD) with $4,000, your bank might provide 1.4% interest. But if you put your $4,000 into a five-year CD, the interest rate might be 2.5%. Why is this? Short-term investments carry less risk, and therefore provide less reward, than longer-term investments. The longer the term, the longer the investor is tying his or her money up, which means more risk. The longer the time frame, the greater the chance that the economy could slow or other events occur to negatively affect financial markets. In this lesson, you will compare various situations, like short-term and long-term investments, and determine the optimal solution based on circumstances and context. You will see how average rates of change provide long-term information and instantaneous rates of change provide short-term information. You will also see how concavity can show how rates of change behave.

Given the graphs of two exponential functions, identify which function will increase or decrease at a faster rate in the long term.

Lesson Introduction

The population of Jamesville has been growing by 2% each year. The population of Burlington has been growing by 3% each year. The people of Burlington are concerned that the town will grow too fast if this continues, putting pressure on infrastructure like roads, first responders, and schools. While this does not seem like a huge difference, over time it could lead to big differences in the populations. Suppose, for example, the population of each city is currently 250,000. After just two years, Burlington would have 5,125 more residents than Jamesville, and after just five years, the difference in populations would be almost 14,000. From a math standpoint, the citizens' concern is with the rates of change; if the two cities start off with the same population, Burlington will experience a larger average and instantaneous rate of change than Jamesville will. In this lesson, you will identify functions with faster rates of change, just like this comparison between Jamesville and Burlington, by examining exponential functions.
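The population figures above are easy to verify. A minimal sketch, assuming both towns start at 250,000 as the lesson supposes:

```python
def population(start, rate, years):
    """Population after compounding annual growth at the given rate."""
    return start * rate ** years

for years in (2, 5):
    gap = population(250_000, 1.03, years) - population(250_000, 1.02, years)
    print(years, round(gap))
# prints: 2 5125, then 5 13798 (almost 14,000)
```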

Given a real-world scenario, a corresponding graph, and an average or instantaneous rate of change, interpret the rate of change in context.

Lesson Introduction There are many things that change over time, such as the value of a stock portfolio, the population of an endangered species, and the amount of free storage available on a computer network. These kinds of changes can be modeled, or represented, by functions. Average rates of change express how much a function changes over time, while instantaneous rates of change express how quickly or how slowly a function changes at a particular point in time. If you have completed other lessons in the course, you have seen how to use average and instantaneous rates of change for linear, polynomial, exponential, and logistic functions. Now you will learn how to apply average and instantaneous rates of change to any function. You will learn that the average rate of change can be determined by computing the slope of the line, you will learn what distinguishes an increasing average rate of change from a decreasing average rate of change, and you will learn the limits of usefulness for an instantaneous rate of change.

Interpolations and Extrapolations Summary

Lesson Summary In this lesson, you learned about ranges, interpolations, and extrapolations. You explored several scenarios to see how these concepts interact in the real world. Here is a list of the key concepts in this lesson: Predictions are based on data and facts. A prediction based on values for which there is data is called an interpolation. With a strong or moderate model (0.3 ≤ r² ≤ 1), the model can be used for any interpolation value. A prediction based on values for which there is no data is called an extrapolation. An extrapolation is a prediction of how the variables would interact in another time or situation, assuming no drastic changes in how the variables behave. A range is the distance between the x-value of the smallest data point and the x-value of the largest data point; thus, range = x_max − x_min. For a more accurate extrapolation with a moderate or strong model, you can go as far down and as far up as 25% of the range. For a risky extrapolation with a strong model only, you can go as far down and as far up as 50% of the range.

Summary

Lesson Summary In this lesson, you learned how outliers affect a regression's coefficient of determination. A low coefficient of determination makes predicted values untrustworthy. Ideally, you look for a coefficient of determination close to 1. Here is a list of the key concepts in this lesson: Before evaluating the r²-value, determine if the chosen regression function is appropriate, if all possible outliers have been investigated and explained, and if true outliers have been removed from the data set. r²-values are interpreted on a scale from no model/no correlation to strong model/strong correlation. Possible outliers always decrease the r²-value. Even when possible outliers are retained in the data set, rather high r²-values are possible.

Summary

Lesson Summary In this lesson, you learned that a logistic function is one with a starting point and a natural limit. You also calculated several logistic functions, including situations involving power recovery after a hurricane, a company's market growth, and a computer's speed at different levels of memory usage. Here is a list of the key concepts in this lesson: When data grows fast at first, then slows down and finally approaches a limit, a logistic function should be used to model the data. Any situation that has natural lower and upper limits is likely modeled well by a logistic function. A logistic function is in the form f(x) = L/(1 + C·e^(−kx)) + m, where L + m is the function's maximum value, or upper limit, while m is the function's minimum value, or lower limit. When evaluating a logistic function, follow the order of operations, noting any grouping symbols.
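As a small illustration of evaluating this form while respecting the order of operations, here is a sketch with made-up parameter values (the lower limit of 0 and upper limit of 500 are assumptions for the example, not values from the lesson):

```python
import math

def logistic(x, L, C, k, m):
    """f(x) = L / (1 + C * e**(-k * x)) + m; maximum L + m, minimum m."""
    return L / (1 + C * math.exp(-k * x)) + m

print(logistic(0, L=500, C=100, k=0.3, m=0))    # ~4.95, near the lower limit
print(logistic(40, L=500, C=100, k=0.3, m=0))   # ~499.7, near the upper limit
```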

Summary

Lesson Summary In this lesson, you learned that both average and instantaneous rates of change can be compared by examining their graphs. Here is a list of the key concepts in this lesson: Both average rates of change and instantaneous rates of change can be compared by the steepness of two lines. The steeper line indicates a greater rate of change.

Summary

Lesson Summary In this lesson, you learned that it is very important to understand the basic characteristics of linear, polynomial, exponential, and logistic functions, as well as to know when to use which one based on the shape and characteristics of data. Here is a list of the key concepts in this lesson: Linear functions are good for modeling data that lies in a straight line or data that steadily increases or decreases by a set amount. Polynomial functions are good for modeling data with turns. Polynomials are also good for modeling many distance, velocity, and acceleration problems. In particular, quadratics are good for these problems. Exponential functions are good for modeling data that increases or decreases by a constant ratio. Likewise, exponentials are good for data with one limiting factor, which by definition means one asymptote. Exponentials are generally good at modeling investments and technology advancements with one limiting factor. Logistic functions are good for modeling data that has two limiting factors, meaning two asymptotes. Most population models are logistic functions since it is impossible to have fewer than zero members of a population and there are always upper limits, due to resource restrictions, on how large a population can grow.

Outliers, Extrapolation, Interpolation Summary

Lesson Summary In this lesson, you learned that outliers must be evaluated and addressed before proceeding with analyzing or using a model. You also reviewed procedures for appropriate extrapolation to produce values that are reliable. Here is a list of the key concepts in this lesson: Possible outliers and outliers always affect a regression model's equation, graph, extrapolation values, and interpolation values. Before performing an extrapolation or interpolation, make sure possible outliers are either explained and kept in the data or removed if they are true outliers. Never interpret the r²-value before attending to possible outliers. If a regression model is strong and all possible outliers are attended to, it is possible to extrapolate as high as 50% of the range on the upper and lower sides. If a regression model is moderate and all possible outliers are attended to, it is possible to extrapolate as high as 25% of the range on the upper and lower sides.

SOME Summary

Lesson Summary In this lesson, you learned that there are four tools to use to check for sources of error in a regression model; these tools are easy to remember as they form the acronym SOME. Here is a list of the key concepts in this lesson: Every regression model should be checked for these four potential sources of error: S for sample size; a reliable model needs about 10 or more data points. O for outliers; all possible outliers must either be explained and kept in the data set or removed from the data set. M for model strength; a model is of either strong or moderate strength, as measured by r², and the model uses the most appropriate function for the data and situation. E for extrapolations; extrapolations may go out only as far as the model strength indicates. For a moderate-strength model, 25% of the range is acceptable, while for a strong model, 50% of the range is acceptable. Remember that a regression professional can go beyond these limitations, but you are not expected to do that yourself.

Summary

Lesson Summary In this lesson, you learned that too few data points can lead to unpredictable results. Also, nothing measurable is limitless. All regression functions work only within certain limitations of their variables. Stretching the independent variable (that is, the x-values) too far produces results that make no sense. Here is a list of the key concepts in this lesson: Given a data set and a proposed function to model the data, it is vital to evaluate any real-world constraints that impact the model. When constraints are identified, the model can only be used reliably within the defined limitations.

Summary

Lesson Summary In this lesson, you practiced working with horizontal asymptotes for functions derived from several general real-world situations including a customer-satisfaction scenario, an over-used server problem, and a patient with high blood pressure. Here is a list of the key concepts in this lesson: If a function approaches a certain value when its x-value becomes very small or very large, the function has a horizontal asymptote. In real life, having limiting factors is a trademark characteristic of horizontal asymptotes. A function can cross its own horizontal asymptote(s) before its x-value becomes very small or very large.

Last Summary

Lesson Summary In this lesson, you reviewed the first four steps, SOME, that should be checked whenever you evaluate a regression. Then a final, fifth step, V, was added to complete the acronym SOMEV. If any of these steps fail, any conclusions drawn from the regression should not be trusted. Here is a list of the key concepts in this lesson: The first step for evaluating a regression model is S for sample size. The second step for evaluating a regression model is O for outliers. The third step for evaluating a regression model is M for model strength and model choice. The fourth step for evaluating a regression model is E for extrapolations. The fifth and last step for evaluating a regression model is V for validity of conclusions.

Lesson Summary

Lesson Summary In this lesson, you saw how an outlier can affect a regression function's graph. In some cases, an outlier pulls the whole graph toward it. In other cases, a function's graph works like a lever system, where the outlier and part of the graph go in opposite directions. Here is a list of the key concepts in this lesson: If an outlier is caused by an unusual condition outside the norm, it should be removed. For all regressions, outliers can change the equation and graph of the regression function, and this can degrade any predictions that rely on the regression model. An outlier generally pulls the regression function toward itself, and this can cause over- or under-estimates for future values. Outliers for logistic regressions can cause unexpected changes, so be especially careful with outliers for these models.

Given a data set, a proposed function to model the data set, a conclusion, and the supporting calculation for the conclusion, interpret the calculation used to make the conclusion. Summary

Lesson Summary In this lesson, you saw three different models and answered three different questions related to these models. You learned that there is usually some numerical aspect of a given model to focus on to answer a question. For example, you used the correlation coefficient to conclude that age and salary are related, you used extrapolation values to predict future budget needs for Randall Computers, and you used interpolation and extrapolation values for predicting the number of redback spiders in a forest. In all of these, you used the SOME aspects of the models to determine if the models were reasonable to use to answer these questions. Here is a list of the key concepts in this lesson: In general, a model must satisfy the SOME aspects to determine if it can legitimately be used to answer questions. In some instances, possible outliers can be ignored. This is true when removing possible outliers only improves the model's already strong fit. Interpolation and extrapolation values are always affected by possible outliers, so do not attempt interpolation or extrapolation while possible outliers remain unexplained.

Identifying Slope and the Y-Intercept

Linear functions also help model everyday scenarios in the IT world. Consider this next example: Laptop computers are more portable than desktop computers, which makes them a must for IT workers like Sam, who often works on the move, away from his desk. The percent of battery power, P(h), remaining h hours after Sam turns on his laptop computer is P(h) = −20h + 100. Can you identify the slope and y-intercept and interpret what they mean in this context? Recall that the y-intercept is b in f(x) = mx + b. In P(h) = −20h + 100, the y-intercept is 100, which means the battery power is at 100% when Sam first turns on the laptop. The slope is m in f(x) = mx + b. In P(h) = −20h + 100, the slope is −20, which means the battery power decreases by 20% every hour.

The number of financial applications at Macintosh Store grows by a certain fixed amount every year according to the following function: M(t) = 2,200t + 5,000, where t is the number of years since 2000. Which statement is correct? The function's y-intercept is 5,000. It implies that there were 5,000 financial applications at Macintosh Store in 2000. Correct! In f(x) = mx + b, the y-intercept is b.

The number of financial applications at Macintosh Store grows by a certain fixed amount every year according to the following function: M(t) = 2,200t + 5,000, where t is the number of years since 2000. Which statement is correct? The function's slope is 2,200. It implies that the number of financial applications increases by 2,200 per year. The slope is m in f(x) = mx + b.

Lesson Summary

Think back to the introduction to this lesson, where you were embarking on a weight-loss program. You used the function W(t) = −1.5t + 240 to figure out your expected weight for any given week during the diet. The key things you needed to know were the function's slope and its y-intercept, and as you now know, this same process helps solve many other problems as well. Here is a list of the key concepts from this lesson: A linear function is in the format f(x) = mx + b, where m is the slope and b is the y-intercept. A linear function's slope shows the rate of change in the function's value. A linear function's y-intercept shows the function's starting value, that is, the function's value when the input is 0.

Rates of Change and Y-Intercepts

Linear functions are useful any time there is a constant rate of change. In a linear function, m is the slope and represents your rate of change, and b is the y-intercept, which remains constant. This formula can help model scenarios in the real world. Consider this example: Your friend Rich approaches you with an opportunity to invest in his new food delivery service. The company needs $500 to get started, and then it will have a monthly gas expense of $100. You are interested but cautious. To calculate the cumulative cost of your investment in dollars, use this function: C(t) = 100t + 500, where t represents the number of months. Notice that in this example, the variables are C(t) and t instead of x and y, but the input and output concept is the same. By substituting a given value for t (months in business), you can calculate the cumulative cost, C(t), by that month. What is the company's cost at the very beginning, when the number of months in business is t = 0? C(0) = 100(0) + 500 = 0 + 500 = 500. The value $500 is the function's y-intercept (the value of the function when t = 0), which is the start-up cost before the first month of business. A function's y-intercept is simply the function's value when the input variable is 0. Notice that 500 is the y-intercept of C(t) = 100t + 500. In general, in a linear function f(x) = mx + b, the value of b is the y-intercept. This is a shortcut for finding the y-intercept. The company's cumulative cost after one month is C(1) = 100(1) + 500 = 600. After two months, the cost is C(2) = 100(2) + 500 = 700. Each month, the company's cumulative cost increases by $100. This value is the rate of change, or the slope of the line. You can identify the slope in C(t) = 100t + 500 as the number in front of the independent variable, t. A slope's unit is always in the format of a rate, such as dollars per month, miles per hour, or units per minute. It is important to know what units you are working with when using a slope. Recall that in a linear function f(x) = mx + b, m is the slope and b is the y-intercept. Here are some special cases: The slope of f(x) = x + 3 is 1, because x can be considered as 1 × x. The slope of g(x) = −x + 3 is −1, because −x can be considered as −1 × x. The y-intercept of h(x) = 2x is 0, because the function could be written as h(x) = 2x + 0. For p(x) = 1 − 2x, the slope is −2 and the y-intercept is 1, because the function could be written as p(x) = −2x + 1.

An action camera company has fixed costs of $9,000 per month and a material cost of $500 to produce each camera. The function modeling cost per month is C(x) = 9,000 + 500x. What is the function's y-intercept? What does it mean? The function can be written as C(x) = 500x + 9,000, which matches the format of f(x) = mx + b, where b is the y-intercept. The y-intercept is 9,000. It implies the company has a fixed cost of $9,000 per month, before producing any cameras.

An action camera company has fixed costs of $9,000 per month and a material cost of $500 to produce each camera. The function modeling cost per month is C(x) = 9,000 + 500x. What is the function's slope? What does it mean? The slope is 500. It implies the cost of producing each camera is $500. In f(x) = mx + b, m is the slope.
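A tiny sketch of this cost model makes the roles of the slope and y-intercept visible (the function name is illustrative):

```python
def cost(t):
    """Cumulative cost after t months: slope 100 ($/month), y-intercept 500."""
    return 100 * t + 500

print(cost(0))   # 500: the y-intercept, the start-up cost before month one
print(cost(1))   # 600
print(cost(2))   # 700: each month adds the slope, $100
```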

Identifying Rate of Change

Linear functions help us graphically model scenarios in the real world. Consider this example: Sarah is on a team of IT specialists who maintain desktop PCs at ITT Chips. Due to technology development, she can maintain more and more PCs each year. In 2004, she was responsible for 75 PCs; by 2009, she was responsible for 115 PCs. If you use the function P(t) to model the number of PCs Sarah maintains, where t is the number of years since 2000, the points (4, 75) and (9, 115) would be on this function. You can connect the points and sketch the function's graph. On the graph, from point A (4, 75) to point B (9, 115), you can draw a slope triangle, which is a right triangle with points A and B as two vertices. The right triangle's height is called the rise, which is 115 − 75 = 40 units. This implies that from 2004 to 2009, Sarah was put in charge of 40 more PCs. The triangle's base is called the run, which is 9 − 4 = 5 units. This implies it took 5 years for Sarah to be put in charge of 40 more PCs. What is the average rate of change over those 5 years? You can divide to find out: 40/5 = 8. The result implies that, on average, Sarah's workload increased by 8 PCs per year. This rate of change is called the linear function's slope. To find the slope by using a graph, identify two points on the graph, draw a slope triangle, and then calculate the slope by dividing rise by run. On a linear function, you can pick any two points to calculate the line's slope, and you should get the same value.

In this section's scenario, notice that point C (0, 43) is also on the graph. Use C (0, 43) and B (9, 115) to calculate the rise, run, and slope of the line. rise = 72, run = 9, slope = 8.

In this section's scenario, notice that point C (0, 43) is also on the graph. Use C (0, 43) and A (4, 75) to calculate the rise, run, and slope of the line. rise = 32, run = 4, slope = 8.
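The rise-over-run calculation generalizes to any two points. Here is a minimal sketch (the helper name is illustrative):

```python
def slope(p1, p2):
    """Rise over run between points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((4, 75), (9, 115)))   # 8.0: 8 more PCs per year
print(slope((0, 43), (9, 115)))   # 8.0: any two points give the same slope
```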

Which type of functions should you use to model the population of rabbits on an island? Which situation should use a logistic function to model it?

Logistic function: the population of animals cannot grow forever on an island with limited resources. A fast-growing company's market share, in percent, should also be modeled with a logistic function: a company's market share has an upper boundary of 100%.

Online Gamers Continued

Look at the online gamer data Sarah collected from her web servers. The following scatterplot has had its outliers removed: In general, this data grows in a strong linear fashion. There are some ups and downs, but the number of gamers is steadily increasing. A steady increase or a steady decrease is the hallmark of a linear function. As such, you could run a linear regression, using specialized software, which would give you the following graph: The coefficient of determination is r² = 0.9313, which indicates a strong fit to the data. You might be wondering if Sarah should stop there. Since the linear model is strong, should she bother trying other regressions on this data? In the real world, perhaps not. But maybe Sarah really wants to see how other regressions work with this data. Read on to see how the other regression analyses turned out. The following graph depicts the result of a third-degree polynomial regression. The next graph depicts the result of an exponential regression. The next graph is the result of a logistic regression.

Comparing Two Logistic Functions

Maria works for the IT department of a company named Saga, which plans to hire 500 new employees to accommodate business growth. Maria's team is in charge of installing new personal computers (PCs) for all those new hires. Two companies ended up at the top of the bid process. The numbers of installed PCs by Fast PCs and by Express PCs can be modeled by F(t) and E(t) on the graph, where t is the number of days since the installation project starts. The following graph shows both functions. Look carefully at the formulas of those two functions:

F(t) = 500/(1 + 100e^(−0.3t))
E(t) = 500/(1 + 1,500e^(−0.3t))

Recall that a logistic function is in the form f(x) = L/(1 + Ce^(−kx)) + m, where L + m is the maximum, m is the minimum, and the k-value determines the rate of change in the function's middle segment. The maximum of both functions is 500, implying a total of 500 PCs will be installed. The minimum of both functions is 0, implying 0 is the fewest number of PCs to be installed (if the company does not take the job or is just starting the job). The k-value of both functions is 0.3, implying both functions grow at the same rate in the middle segment. The only difference is the value of C. On the graph, you can see that the smaller the C-value, the earlier the function starts to grow in the middle segment.

Which company should Maria choose? It depends. In Fast PCs' bid, PCs are installed first and then tested. In Express PCs' bid, more testing and planning would be done before PCs are installed. Consider two possible scenarios here: If the new employees have been hired and are waiting to use their PCs, Maria would probably choose Fast PCs. However, if the new employees will not be hired or will not need to use their PCs until 30 or 40 days later, Maria would probably choose Express PCs, because more testing in the early stages could eliminate potential issues and avoid possible mistakes during the installation. In fact, the plan is to hire new employees in groups, 100 at a time, every 15 days. Now Maria needs to know when 100 PCs would be installed under both plans. She could solve these equations:

100 = 500/(1 + 100e^(−0.3t))
100 = 500/(1 + 1,500e^(−0.3t))

Or, instead of solving those equations, Maria could estimate the solutions by graph. She notices F(10.7) ≈ 100 and E(19.7) ≈ 100, implying Fast PCs would have 100 PCs installed by the end of the 10th day, and Express PCs by the end of the 20th day. If Maria must have 100 PCs installed by the end of the 15th day, she has to choose Fast PCs. Do you see the importance of the context of Maria's problem?

Lesson Summary

In this lesson, you made choices based on logistic scenarios, such as installing computers and selecting a promotional plan. You also learned the importance of considering the exact context of a given problem. Here is a list of the key concepts in this lesson: In f(x) = L/(1 + Ce^(−kx)), the C-value determines how quickly the logistic function starts to grow. The smaller the C-value, the more quickly the function starts to grow. When looking at two logistic functions, you must choose the function that meets your current needs. The values of L, C, and k should be part of your consideration, and the context is just as important. When in doubt, compare the graphs of the functions to the context of the situation.
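As a follow-up, Maria's graph estimates can be checked numerically. This sketch, assuming the two bid models above, scans forward until each model first reaches 100 installed PCs (the helper name and step size are illustrative):

```python
import math

def fast_pcs(t):
    return 500 / (1 + 100 * math.exp(-0.3 * t))

def express_pcs(t):
    return 500 / (1 + 1_500 * math.exp(-0.3 * t))

def days_to_reach(f, target, step=0.1):
    """Scan forward until the model first reaches the target."""
    t = 0.0
    while f(t) < target:
        t += step
    return t

print(round(days_to_reach(fast_pcs, 100), 1))      # ~10.8 (graph estimate: 10.7)
print(round(days_to_reach(express_pcs, 100), 1))   # ~19.8 (graph estimate: 19.7)
```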

Tables as Inverse Function

Micah is figuring out how many Mbps of internet speed she needs to maximize the number of technical reports her employees complete each day. But what if she turns the question around and sets a specific goal for the number of reports produced daily? Then Micah needs to determine the best internet package to purchase based on that goal. Micah can use the same table, but this time she uses the number of reports completed per day to determine the internet speed needed, the inverse of her first question:

Internet Speed (in Mbps) | Reports Completed per Day
2 | 3
4 | 6
6 | 9

Micah sets her goal for each employee at 6 reports per day. What internet speed does she need to purchase? The row with 4 Mbps shows that this speed produces 6 completed reports per employee per day, so the answer is 4 Mbps. You could also examine the following graph and trace the y-axis to 6 reports. Then you could see that 6 reports correlates with x at 4 Mbps.

John's company needs to hire someone who can type 8 papers a day, but he doesn't want to hire an over-skilled person who will blaze through the work and then sit around waiting for something else to do. A friend, Cara, shared the following table she created based on data from her own, very similar, company in another state:

Typing Speed (in words per minute) | Number of Papers Completed
25 | 4
50 | 8
75 | 12

How many words per minute would the new employee need to type? John would need to hire someone who can type 50 wpm.

Tables as Functions

Micah is shopping for new internet plans for her business, which produces technical reports for small-business clients. She has been tracking the number of reports her employees can complete, on average, based on the speed of their internet connections, expressed in megabits per second (Mbps). Examine the following portion of her data:

Internet Speed (in Mbps) | Reports Completed per Day
2 | 3
4 | 6
6 | 9

Given that Micah is shopping for the best speed to increase her production rate, what do you think her independent and dependent variables are? The independent variable would be internet speed, because that is the factor that "explains" the differences in production; the dependent variable would be the number of reports completed per day. With this information, Micah can make the best decision on the internet speed needed to reach her ideal production rate. However, she wants to take it further. Micah graphs the data points in the table so she can make predictions about internet speeds she might want to purchase in the future. Use the following graph to see if you can find the answer to this question: If Micah increased her company's internet speed to 8 Mbps, how many reports could she expect the employees to complete? Using the graph, Micah can estimate that, at 8 Mbps, her employees could produce about 12 reports per day, on average.

Examine the table depicting a function:

Typing Speed (in words per minute) | Number of Papers Completed
25 | 4
50 | 8
75 | 12

When drawing conclusions for this function, which of the choices below would be an accurate statement? The typing speed determines the number of papers completed. When looking at the table as a function, the typing speed determines the number of papers completed. Someone who types 25 words per minute (wpm) can type only 4 papers in the same time that a person who types 75 wpm can type 12 papers.
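Because each input row pairs with exactly one output, the table behaves like a small lookup function, and swapping the columns gives the inverse view used in the previous section. A minimal sketch with the reports data (names are illustrative):

```python
# Table as a function: internet speed (Mbps) -> reports completed per day
reports_by_speed = {2: 3, 4: 6, 6: 9}

# Inverse view: reports per day -> internet speed needed
speed_by_reports = {reports: speed for speed, reports in reports_by_speed.items()}

print(reports_by_speed[4])    # 6 reports per day at 4 Mbps
print(speed_by_reports[6])    # 4 Mbps needed for a goal of 6 reports
```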

Given the graph of a logistic function for a real-world problem, translate the input and output pairs of the logistic function into real-world meaning.

Modeling real-world phenomena with logistic functions does not mean much if you cannot then interpret what the function is telling you about a situation. This lesson will focus on interpreting input-output pairs for logistic functions. This means you will estimate input-output pairs via a graph and start comparing situations modeled by two different logistic functions to identify an optimal situation.

All of these models have pretty good coefficients of determination, so which type of regression should Sarah choose? She starts by looking at the r²-values to see if one stands out as the strongest fit from that perspective.

Model | r²-value
Linear | 0.9313
Polynomial (3rd degree) | 0.9400
Exponential | 0.9385
Logistic | 0.9403

Not much help here; all the values are close. The linear model has the smallest r²-value, but this alone does not mean it is the worst model here. Sarah decides to collect more data to see if there is an upper capacity for this variable (number of gamers) that could be predicted with the data. While that is being collected, Sarah looks more closely at her options for the best regression for this data. An additional principle for Sarah to consider is the idea of parsimony: use the simplest model for a given context. Thus, when the coefficient of determination remains about the same for a set of models, it is best to choose the simplest model. Here the linear model is the simplest, so, using the principle of parsimony, she would choose the linear model.

First, keep in mind that the purpose of a regression is to make an educated guess about how things work in situations where you have no actual data, typically in the past or in the future. Therefore, all regression models should be judged on how well they can help you see patterns from the past or predict the future. By and large, polynomial regressions do not do a good job of looking too far into the past or into the future. That is because polynomials always go to infinity or negative infinity as the x-values get larger (positive) or smaller (negative). There are very few real-world scenarios where numbers can consistently get larger and larger. That means polynomial regressions will not help Sarah as much as the other models, so she eliminates a polynomial regression.

What about an exponential regression as a solution to Sarah's quandary? Exponential regressions suffer from one of the same problems as polynomial regressions: they tend to go to infinity or negative infinity as they look to the past or future. In some cases, such as models of radioactive decay, this is valuable. However, when modeling things like populations or revenue projections, it is best to avoid infinity because, in context, infinity is not realistic.

Now Sarah considers logistic models. Logistic models are useful in the sense that they can reveal maximums in the past and in the future. However, this same aspect can be too limiting in some situations. Would revenue have a predictable maximum as time goes on? Probably not. She dismisses this option.

Each year, a city increases its spending to help homeless people. The following linear regression models the number of homeless people in the city's downtown area since 2000. Which statement about this regression is true?

Jessica collected some rent data in a local neighborhood. She is trying to determine a reasonable rent for a 2,000-square-foot house. She created the following scatterplot and ran a linear regression.

Most likely, the real data values will level out and approach a limit after 2012. A new regression model will then be needed. It is unlikely the number of homeless people will drop to 0 in 2023, no matter how much money the city spends.

The function should only be used from about x = 500 to x = 2000. The area of a rental should be between 500 and 2,000 square feet.

Given two exponential equations modeling a real-world situation, identify the equation or model that represents an ideal situation based on the real-world situation.

Nadia at Better Hires is still working on her presentation to the company's board of directors. Part of the presentation relates to an exponential function, and Nadia has to explain what happens in that function as time, the x-variable, increases. Fortunately for Nadia, she understands the concepts that this lesson covers. In this lesson, you will learn that exponential functions grow proportionally, and you will see what happens to the instantaneous rate of change when a function grows or decays exponentially. You will use this information to identify ideal situations for various real-world scenarios.

Which Function Grows Faster?

Nadia from Better Hires has been working on measuring and modeling her team's performance. The model for the team's performance, as measured by the number of advertisers, was h(x) = 18 × 1.03^x, where x is time measured in weeks. As x increased, the instantaneous rate of change also grew. As in the Campbell Computers example, this gets a little unrealistic as time goes on. If x = 375, meaning the team's performance is in the 375th week, the instantaneous rate of change would be over 35,700. Therefore, the weekly increase in advertisers would be more than 35,700, which is far too large to be realistic. Despite the fact that exponential growth is usually not realistic indefinitely, exponential functions are reasonable models to use for a period of time, until the growth rate gets too large.

Being able to compare two exponential functions and their growth rates is important because growth rates have a significant effect on what happens as time passes. Using the Better Hires example, look at what happens if the growth rate were 1.02 instead of 1.03. Here are the graphs of g(x) = 18 × 1.02^x (in red) and h(x) = 18 × 1.03^x (in blue). The function g(x) has the lower of the two growth rates. As x gets larger, the rates of change, both average and instantaneous, for g(x) are less than for h(x). For the instantaneous rates of change, you can use the applet below to see how g(x) (in black) consistently grows slower (has a smaller instantaneous rate of change) than h(x) (in blue) does. The line that passes through x = 30 on h(x) has a significantly greater slope than that of the line that passes through x = 30 on g(x), which means that the instantaneous rate of change of h(x) is greater than the instantaneous rate of change of g(x) at that point. Because that is true for any point on h(x) after x = 0, the function h(x) is always increasing faster than g(x). When x = 0, the two functions have the same starting value, 18, but the larger growth factor for h(x) makes for a greater instantaneous rate of change for h(x). If two increasing exponential functions have the same starting point and different rates of change, one of the exponential functions will always increase faster than the other. Each time x increases, the function multiplies by a bigger number. This causes the exponential function with the bigger growth rate to increase faster.

Decreasing exponential functions are not as simple, however. If two decreasing exponential functions have the same starting value but different decay rates, which function is decreasing faster actually varies over time. As you can see in the next graph, the function m(x) = 45 × 0.95^x (in black) and the function n(x) = 45 × 0.90^x (in blue) are both decreasing. However, n(x) decreases faster only in the beginning; notice that n(x) has a more negative instantaneous rate of change than m(x) from x = 0 to about x = 13.3. At about x = 13.3, this trend changes and m(x) starts to decrease faster than n(x); notice that m(x) has a more negative instantaneous rate of change than n(x) after that point. This may seem a bit counterintuitive. However, think of it this way: n(x) decreased so fast in the beginning that it has very little left to lose as time goes on. This is why, for decreasing exponential functions, the function that decreases more in the long term (that is, the function with the larger declines in instantaneous rates of change) is actually the function with the decay rate closer to 1. However, just because that function decreases (decays) more in the long term does not mean it will have smaller function values.
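If you want to verify these comparisons numerically rather than with the applet, a small Python sketch can estimate the instantaneous rates of change with a short secant line; the helper name instantaneous_rate is illustrative:

```python
# Sketch: numerically comparing instantaneous rates of change for the two
# Better Hires models g(x) = 18(1.02^x) and h(x) = 18(1.03^x).
def g(x):
    return 18 * 1.02 ** x

def h(x):
    return 18 * 1.03 ** x

def instantaneous_rate(f, x, dx=1e-6):
    """Estimate the slope of f at x with a central difference."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

for x in [0, 30, 100]:
    print(x, instantaneous_rate(g, x), instantaneous_rate(h, x))
# At every x, h(x) has the greater instantaneous rate of change,
# because its growth factor (1.03) is larger.
```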
As x increases, which function is going to have the greater instantaneous rate of change: f(x) or g(x)? Explain your answer. The function g(x) will have a greater instantaneous rate of change for large enough x-values because, for large enough x-values, the line through the x-value will be steeper on g(x) than through the same x-value on f(x). As x gets larger, which function will have a greater instantaneous rate of change: g(x) = 100 × 1.21^x or h(x) = 100 × 1.18^x? Explain your answer. The function g(x) will have a greater instantaneous rate of change. This is because 1.21 > 1.18.

Lesson Summary

In this lesson, you learned how to identify which of two functions increases or decreases faster in the long term by examining several examples, including Campbell Computers and Better Hires. Here is a list of the key concepts in this lesson: For two exponential functions that are increasing, the function with the greater growth rate, or instantaneous rate of change, will always increase faster as x increases. For all exponential functions that are decreasing, as x increases, the instantaneous rate of change gets closer to zero.

Given a real-world problem, a data set, a regression model for the data, and the corresponding coefficient of determination, identify any sources of error not accounted for in a conclusion or solution to the problem (specifically, a low N-value, outliers, a low coefficient of determination, or improper extrapolations).

Netterly launched a new series this year with high hopes. It was the channel's fourth attempt to attract the desirable 25- to 34-year-old demographic, and this time, management thought they had found a winning concept: combining science fiction and comedy. However, Netterly's management knew that the data would tell the real tale. The data was coming in and a model was emerging. Once a set of data is collected, a regression is run, and a conclusion is reached, how can you determine if the conclusion can be relied on? How do you double-check the conclusion and look for possible errors? In this particular lesson, you will see how to check a model for four things: sample size, outliers, model strength, and extrapolations. Conveniently, these four items form the acronym SOME, making them easier to remember.

Graph of the Inverse Function

Next, you will explore the relationship between the graphs of a function and its inverse function. On a given day, one English pound can be exchanged for 1.6 U.S. dollars. The function P(d) = 0.625d can be used to calculate the value in English pounds of d U.S. dollars. Since 1 U.S. dollar can be exchanged for 0.625 English pounds, P(d)'s inverse function is D(p) = 1.6p. Examine their graphs: On the function P(d), point C(1.6, 1) implies $1.60 = £1. The corresponding point on the inverse function D(p) is A(1, 1.6), implying £1 = $1.60. Note that those two points are reflections of each other across the line y = x. Similarly, on the function P(d), point D(3.2, 2) implies $3.20 = £2. The corresponding point on the inverse function D(p) is B(2, 3.2), implying £2 = $3.20. Again, those two points are reflections of each other across the line y = x. Here is an important observation: If (x, y) is on a function f(x), then (y, x) must be on f(x)'s inverse function f−1(x). Points (x, y) and (y, x) are reflections of each other across y = x. As a result, the graphs of f(x) and f−1(x) are reflections of each other across y = x. You can verify this with the graphs of D(p) = 1.6p and P(d) = 0.625d.
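A short Python sketch can confirm the inverse relationship numerically; applying P and then D returns the original input:

```python
# Sketch: verifying the inverse relationship between P (dollars to pounds)
# and D (pounds to dollars) from the example above.
def P(d):
    return 0.625 * d  # dollars -> pounds

def D(p):
    return 1.6 * p    # pounds -> dollars

print(P(1.6))     # 1.0: the point (1.6, 1) on P
print(D(1.0))     # 1.6: the reflected point (1, 1.6) on D
print(D(P(3.2)))  # 3.2: a function followed by its inverse returns the input
```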

Outliers and Functions of Best Fit

No data set is perfect, and there are sometimes possible outliers that you can identify visually in a data set. A possible outlier is a data point that lies far away from the general trend of the data. Sometimes, these possible outliers occur due to rare circumstances that are outside the scope of the data. In such cases, the possible outlier is a true outlier and is removed from the data set. In other cases, the possible outlier occurs due to circumstances that are completely inside the scope of the data. In such cases, the possible outlier is not an outlier at all and is kept in the data set. In the next example, you will see some possible outliers, how they affect logistic regressions, and how to deal with them. Consider this example: Movie Mania started offering its members discounted movie tickets in 2000. The number of its memberships over the years is shown in the next graph, with a logistic function of best fit. The function is g(x) = 3.1395/(1 + 415.3809e^(−0.8645x)) and the coefficient of determination is r2 = 0.9339. There is a strong correlation between the number of years and the number of memberships when modeled by this logistic function. However, point K is obviously outside the general trend of the data set. Point K is called a possible outlier for this data set. Outliers frequently appear in real-life data. In this scenario, it turned out that Movie Mania ran a major promotion in late 2009 and then backtracked on its promises. Many users signed up and then canceled their memberships later. Due to the nature of this issue, the outlier point K can be ignored, because similar situations are unlikely to happen again. Said another way, point K reflected a large bump in the number of memberships outside of the company's normal business practices. If the company had not backtracked on its promises and had continued doing business with the new promises made, then point K would not be a true outlier and you would keep it in the data set. However, given Movie Mania's decisions, you should remove point K from the data set. With the outlier removed, examine the new function of best fit and its coefficient of determination. With the outlier removed, the function of best fit becomes h(x) = 3.0375/(1 + 130.2802e^(−0.6898x)), and the coefficient of determination improves to r2 = 0.9928. Predictions made by h(x) are more trustworthy than those made by g(x), which was calculated with the outlier included. Notice that the maximum number of memberships predicted by this new model is slightly less than that of the old model (about 3.04 million memberships compared to 3.15 million). One outlier can have huge impacts on the predictions of a model. When you analyze data, you should visually identify possible outliers and identify how they impact the regression equation and the coefficients of determination associated with the function. If it is proper, remove the outliers and then recalculate the function of best fit. Also, it is sometimes difficult to know ahead of time how removing an outlier will impact the regression equation. The only sure way to know is to compare a regression equation that included the outlier to a regression equation that excluded the outlier. In this lesson, you identified outliers and the effect they can have on functions of best fit and coefficients of determination. You also learned what can legitimately be done about outliers. Here is a list of the key concepts in this lesson: A possible outlier in a data set negatively affects the regression function and coefficient of determination.
Any possible outlier should be investigated to see why it occurred. If it occurred because of normal circumstances, it should be kept in the data set. If the possible outlier occurred because of abnormal circumstances, it should be removed from the data set. Removing a true outlier will always improve the coefficient of determination.
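To see how the two Movie Mania models diverge, you could evaluate both functions side by side. This Python sketch uses the g(x) and h(x) formulas given above; the chosen x-values are illustrative:

```python
import math

# Sketch: comparing the model fit with the outlier included (g) against the
# model fit with the outlier removed (h).
def g(x):
    return 3.1395 / (1 + 415.3809 * math.exp(-0.8645 * x))

def h(x):
    return 3.0375 / (1 + 130.2802 * math.exp(-0.6898 * x))

for year in [5, 10, 15]:
    print(year, round(g(year), 3), round(h(year), 3))
# The long-run limits also differ: g approaches 3.1395 million memberships,
# while h approaches 3.0375 million, so one outlier shifted the predicted maximum.
```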

If a regression model has a strong r2-value, can you trust the results of the model? Explain your answer.

No, you cannot. A regression model with just 2 data points will show r2 = 1, so a strong r2-value alone is not enough to know you can trust the results of the model. You should always check four aspects for all models: SOME, or sample size, outliers, model strength, and extrapolations.

Gerald has been investing money in a retirement account since he started working at age 21. Some years he has made bigger contributions to his retirement account than others, but generally Gerald has put increasing amounts of money into the account. If you looked at the amount of money in his retirement account over the years, would there be an asymptote? Tyra is a weekend race car driver at a local speed track. When Tyra is practicing for races, she pushes her speed to the maximum and also drives on the track alone (so she does not have to slow down for anyone else on the track). If you looked at Tyra's speed over time in these practice laps, would there be an asymptote?

No. An asymptote occurs when the y-values tend towards a specific value, which means there is little or no change happening to the values of the variable. In this situation, Gerald is adding more and more money to the account, so there is no way this situation could have an asymptote. Yes. If Tyra pushes her speed to the maximum and does not have to slow down for anyone else on the track, her speed would push towards the maximum. This means the y-values—speed, in this case—would tend towards a certain value and stay there.

A regression model has an r2-value of 0.955. Does this alone mean it is a strong model? Which statement is true?

No. If there were only a few data points (say, three), then a high coefficient of determination is not meaningful and does not indicate a strong model. If a regression done with nine data points has a coefficient of determination of 0.95, the regression function can be used to predict data. There is no set standard on how much data is enough for a regression. When nine data points generate a good coefficient of determination, the regression function is likely good enough to be trusted.

The following graph models the number of users, in millions, on a social media platform since 2000. The equation for this model is f(x) = 2.94 × e^(0.1x).

No. There is a possible outlier that needs to be explained or removed before this model is even interpreted. All possible outliers must be explained or removed from the data before interpreting a model.

Why Logistic Models Are Not a Good Fit

Now Sarah considers logistic models. Logistic models are useful in the sense that they can reveal maximums in the past and in the future. However, this same aspect can be too limiting in some situations. Would revenue have a predictable maximum as time goes on? Probably not. She dismisses this option.

Real Outliers, Interpolation, and Extrapolation

Now examine how a real outlier impacts interpolation and extrapolation values. Blitz Digital Marketing had been struggling with the cost of employee travel expenditures since 2005. In an effort to drive down employee travel expenditures, in 2005, Blitz Digital wrote new guidelines on what employees could claim as an expenditure and also offered more paid time off to employees who claimed travel expenditures of no more than $55 per trip. The campaign worked well, as you can see from the data in the following graph. The data seemed to be decreasing in an exponential pattern, so an initial exponential regression model was performed. See the red dotted line. Point J seemed to be a possible outlier. That year, 2014, had seen substantially fewer expenditures claimed by employees. Upon investigation, management discovered that this was the year that Blitz Digital's previous travel agency went bankrupt, which greatly reduced company travel for that year. That data point, point J, was removed as a true outlier. A new regression model was calculated without point J, resulting in the solid black curve in the graph. Based on these two models, was there a substantial difference in the predicted time for Blitz Digital's employees to reach an average of $55 of travel expenditures per trip? The initial model, with the outlier point J included, predicted that the $55 average travel expenditure goal would be reached by around t = 14, which would be in 2019. On the other hand, the final model, without the outlier, predicted that the goal would be reached by around t = 17, which would be in 2022. Three years is a substantial difference between these two extrapolation values. In which value should Blitz Digital put its faith? The company should rely on the value from the final model, the model without the outlier. Outliers always change data, and not always in a good way. The only thing left to verify is the r2-value, to ensure the company had produced a model strong enough to support extrapolating so far into the future. For this model, the r2-value was 0.72, indicating a strong fit. This means the furthest reliable extrapolation could occur at x_max + (0.5 × range) = 12 + (0.5 × 12) = 12 + 6 = 18. Since the value found, t = 17, is within this limit, the company can have faith in this extrapolation value. However, keep in mind that this is the riskier of the two levels of extrapolation; comparing the 25%-of-range extrapolation to the 50%-of-range extrapolation, this value falls in the 50% region. In general, always identify possible outliers before interpreting a model. If a possible outlier cannot be explained or removed from a data set, do not do anything with the model. Once possible outliers are accounted for by either explaining or removing them from the data, then you can move on to interpreting any r- or r2-values for the model.
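The extrapolation guideline used above is easy to check directly. This minimal sketch assumes the data ran from t = 0 to t = 12, as described in the example:

```python
# Sketch of the extrapolation guideline: the furthest "reliable"
# extrapolation is x_max + 0.5 * range of the collected data.
x_min, x_max = 0, 12
data_range = x_max - x_min             # 12

safe_limit = x_max + 0.5 * data_range  # 12 + 6 = 18
prediction_t = 17                      # the final model's prediction

print(prediction_t <= safe_limit)      # True: within the riskier 50% region
```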

CPU Usage and Polynomials

Now look at a polynomial function in context. Johan manages web servers at Progress Hospital. He is testing a new application, which takes about 80 seconds each time the program runs. The application's CPU usage, u, measured as a percentage, can be modeled by the function u(t) = −0.0125t^2 + t, where t is the number of seconds since the application begins to run. This is not a linear function, since the variable t has an exponent of 2. This is the function's graph: You can also see the difference graphically here; this function curves, whereas linear functions do not. Compare this polynomial function's equation, u(t) = −0.0125t^2 + t, with a linear equation, like f(t) = −0.0125t + 1, and you can see this polynomial function's independent variable has a maximum exponent of 2, while a linear function's independent variable has a maximum exponent of 1. A variable's maximum exponent in a polynomial is called the polynomial's degree. A linear polynomial always has a degree of 1, while a degree-2 polynomial is called a quadratic polynomial. A quadratic is an equation that includes one variable raised to the second power (like x^2) and no greater exponent; its name comes from the Latin quadratus, because the variable is squared.

But how are input-output pairs calculated with polynomials like quadratics? Since quadratics are more complicated than linear functions, they require more caution in calculation. Consider this example: Suppose Johan needed to know the amount of CPU resources (as a percentage) the application used 10 seconds after it started running and 40 seconds after it started. If you substitute t = 10 into the linear function f(t) = −0.0125t + 1, you multiply first and then add. However, for the quadratic function that Johan is using, u(t) = −0.0125t^2 + t, you would not do the multiplication first, because the order of operations says you have to do the exponent operation before multiplication: u(10) = −0.0125(10)^2 + (10) = −0.0125(100) + 10 = −1.25 + 10 = 8.75. Notice that in the expression −0.0125(10)^2, the exponent operation, (10)^2 = 100, must be done before the multiplication. This result implies that the application would be using 8.75% of CPU resources 10 seconds after it started running. In the graph, it does look like the function crosses the point (10, 8.75), which confirms the calculation. Next, substitute t = 40 into u(t), and you have u(40) = −0.0125(40)^2 + (40) = −0.0125(1600) + 40 = −20 + 40 = 20. The result implies that the application would be using 20% of CPU resources 40 seconds after it started running. In the graph, the function does cross the point (40, 20). This is also the highest point of the arch, representing the maximum percentage of CPU resources this application uses when it runs. As you can see, calculating input-output pairs for quadratic functions is a bit more complicated than it is for linear functions.

Using the model above, calculate the application's CPU usage 70 seconds after it starts to run. The application uses 8.75% of the CPU 70 seconds after it starts to run. This can be confirmed on the graph by looking at the coordinate (70, 8.75). Using the model above, u(t) = −0.0125t^2 + t, calculate the application's CPU usage 80 seconds after it starts to run. The application uses 0% of the CPU 80 seconds after it starts to run. Again, the graph confirms this with the coordinate (80, 0). By now, you have seen two of the easiest types of polynomial functions: linear and quadratic. A linear function has a degree of 1; a quadratic function has a degree of 2.
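Because the order of operations is easy to get wrong by hand, you might check these values with a short program. A minimal sketch:

```python
# Sketch: evaluating Johan's model u(t) = -0.0125t^2 + t. Python's **
# operator applies the exponent before the multiplication, matching the
# order of operations used in the calculations above.
def u(t):
    return -0.0125 * t ** 2 + t

for t in [10, 40, 70, 80]:
    print(t, u(t))  # 8.75, 20.0, 8.75, 0.0
```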
In this lesson, you will model some real-life scenarios with cubic functions, which have a degree of 3. A cubic function can model more complicated data patterns than a quadratic function can.

Choosing the Best Model

Now practice finding the best function to fit some given data, using a variety of short scenarios. Parking fees per hour in a city's downtown area over the past few years are shown in the following scatterplot, with a proposed linear function drawn in to model the data. The data visually fit the linear model very well. After studying the data, it is clear that the parking cost per hour has been increasing by about $0.25 every year on average, which also implies a linear model. The following example is one in which an exponential model works well. The chart shows Intel's central processing unit (CPU) clock speed in megahertz (MHz) since 1970. An exponential function is proposed to model this data set: The curve fits the data pretty well. In addition, it is well known that Moore's Law was fairly accurate from the 1970s to the 2000s, and Moore's Law predicts exponential growth in CPU technology. Yes, an exponential function is appropriate for modeling the data in this scatterplot. Mumina manages web servers for an online game company. A new game started last month, hosted on a server that can manage a maximum of 60,000 gamers. The next chart shows the average number of online gamers each day on the server in the past month, together with a polynomial function drawn in to model it. Is this polynomial a good fit? Although the data points fit the polynomial function pretty well, you must understand the scenario. The server can host a maximum of 60,000 gamers, which is why the data is pointing to a limit of 60,000. A logistic function can model data with a limit, so it is actually a better choice than a polynomial function for this scenario. Also, the polynomial function will eventually decrease as the x-values get bigger, which does not match the situation. How about this next model for the height of a rocket's flight? NASA just launched a rocket, and the height of its flight is modeled in this scatterplot, with a proposed exponential function drawn in to model the data. At first glance, there are fairly wide gaps between many data points and the proposed regression function. You might say that the exponential regression function does not match the general trend of the data. Perhaps an exponential function is not the best choice of function to model this situation. In this case, a good alternative would be a polynomial model. This data should not be fit with an exponential function; a polynomial function should be calculated instead.

More Complex Data Patterns

Now that you are familiar with the basic patterns of data, it is time to move on to more complex data patterns. Begin by examining a new set of data on possible unemployment rates in the future:

Year    Unemployment Rate
2060    2.8%
2065    3.8%
2070    4.8%
2075    5.8%
2080    6.8%
2085    7.8%

This data has been delivered to a federal commission responsible for long-range economic planning. The members are very worried about this potential trend that unemployment may increase at a steady rate of 1% every five years. Such a situation would create major negative impacts on the economy. The federal commission asked for an analysis based on this assumption: By 2060, robots will be a common part of our society. These robots will provide more labor hours over time (approximately 15 hours per week) and will then impact the economy. The following table is roughly what the commission expected the analysis to show:

Year    Gross U.S. Population (in millions)    Citizens with One Robot / Total Hours Saved    Citizens with Two Robots / Total Hours Saved    Citizens with Three Robots / Total Hours Saved
2060    420.20    6% / 378,180,000      4% / 504,240,000       1% / 189,090,000
2065    445.41    8% / 534,494,400      6% / 801,741,600       3% / 601,306,200
2070    472.14    12% / 849,846,096     9% / 1,274,769,144     4% / 849,846,096
2075    500.46    16% / 1,201,115,816   12% / 1,801,673,724    6% / 1,351,255,293
2080    530.49    19% / 1,511,904,533   13% / 2,068,921,993    7% / 1,671,052,379
2085    562.32    24% / 2,024,360,596   14% / 2,361,754,028    8% / 2,024,360,596

But this is what the analysis returned, which does not show nearly as many hours saved:

Year    Gross U.S. Population (in millions)    Citizens with One Robot / Total Hours Saved    Citizens with Two Robots / Total Hours Saved    Citizens with Three Robots / Total Hours Saved
2060    420.20    6% / 378,180,000      4% / 470,624,000       1% / 189,090,000
2065    445.41    8% / 534,494,400      6% / 748,292,160       3% / 601,306,200
2070    472.14    12% / 793,189,690     9% / 1,019,815,315     4% / 793,189,690
2075    500.46    16% / 1,121,041,428   12% / 1,441,338,979    6% / 1,261,171,606
2080    530.49    19% / 911,110,897     13% / 1,217,209,461    7% / 1,136,841,903
2085    562.32    24% / 854,445,850     14% / 1,031,952,954    8% / 1,019,488,477

Why is there such a difference? The analysts explained that the decrease in hours is the result of a probable decrease in efficiency. That is, each robot requires a certain degree of management, which has a compounding effect as more robots are owned. The people with three robots are spending more time organizing and supervising their robots' work than the people with only one robot. This is an example of what is called a "diminishing return." A diminishing return is often seen in computer projects in real life: adding staff increases productivity to a point, but there is an upper limit to that increase, as each staff member also requires training and supervision. This is just one way that interpreting data can be more complicated than it appears at first glance.

Interpreting Solutions

Now that you have practiced solving these equations, you can turn to interpreting the solutions. Consider this example: Earlier, you read that the Scarlet Dragon's main dining room seats only 60 people and management wants to know what hours of the day the staff should plan on using the overflow area. You found solutions to c(t) = 60 by either using the trace lines or estimating the coordinates (7.1, 60) and (9.2, 60), giving the solutions t ≈ 7.1 and t ≈ 9.2. But what do these solutions mean? These solutions show that the restaurant has 60 customers at about 5:06 p.m. and at about 7:12 p.m. on a typical day. (As a review, to change 0.2 hours to minutes, calculate 0.2 × 60 = 12.) With respect to the overflow area, this means that the Scarlet Dragon's staff should plan on using it between about 5:00 p.m. and about 7:15 p.m. With the Scarlet Dragon scenario in mind, solve for t in c(t) = 10 and interpret its meaning in this context. The solutions are t ≈ 0.18 and t ≈ 10.3. These solutions imply that the restaurant has 10 customers at about 10:11 a.m. and at about 8:18 p.m. on a typical day. The points (0.18, 10) and (10.3, 10) are on the function's graph. The input value represents time since 10:00 a.m., and the output value represents the number of customers. With the Scarlet Dragon scenario in mind, solve for t in c(t) = 30 and interpret its meaning in this context. The solutions are t ≈ 0.6 and t = 10. These solutions imply that the restaurant has 30 customers at about 10:36 a.m. and at about 8:00 p.m. on a typical day. The points (0.6, 30) and (10, 30) are on the function's graph. The input value represents time since 10:00 a.m., and the output value represents the number of customers.

Lesson Summary

In this lesson, you learned how to solve for the input given the output using a graph. Here is a list of the key concepts in this lesson: Estimating the associated input for a given output in a polynomial function is referred to as "solving" the polynomial function. Follow these steps to solve a polynomial function using a graph:

1. Determine the output value you are looking for; usually, it will be specifically stated in the problem.
2. Starting with the specific output value you identified, trace that value on the dependent variable axis to any associated coordinates on the graph.
3. Trace from these associated coordinates to their corresponding values on the independent variable axis.
4. Estimate these values on the independent variable axis.
5. Check your solutions by plugging them back into the polynomial function and verifying that you get the output value you identified in step 1.

Polynomial equations can have multiple solutions at times. Multiple inputs can give the same output, but each input can give only one output. Interpreting the solution to a polynomial equation means remembering which function you found a solution to and then interpreting the corresponding values (independent and dependent variables) in a real-world context.

Using Instantaneous Rates of Change for Business Reports

Now that you have seen average and instantaneous rates of change, you need to know when to use one or the other. You will also see how they can both be used in the same situation but for different purposes. Consider this next example: Business professionals often encounter situations where both average and instantaneous rates of change are of interest. For example, suppose that the sales in your company, in thousands of dollars, are modeled by the graph below, where t is the number of days into the new financial year. (Note: Sales are on the y-axis while the days are on the x-axis.) You are preparing a report on how well you did in your first quarter of business, and you want to discuss the status of the company's sales on the day the first quarter ended (when t = 90). An instantaneous rate of change when t = 90 would let you know how sales were growing or declining on that particular day; that is, how sales were increasing or decreasing on the closing day of the first quarter. This would be helpful for deciding what your immediate business plan might be going into the second quarter. Use the applet below to find the instantaneous rate of change when t = 90. You should have seen that the instantaneous rate of change when t = 90 was −0.58. This means the slope of the line going through t = 90 was −0.58. But how do you interpret that instantaneous rate of change for writing up the first-quarter report? It means that sales were decreasing by about $580 per day at the close of the first quarter. (Note: The sales were in thousands, which is why this is $580 per day and not $0.58 per day.) The equation h = −30.5t^2 + 800t describes the height, h, in feet of a bullet fired from a gun up into the air t seconds after the bullet was fired. The instantaneous rate of change at t = 10 is 190. Interpret the meaning of the instantaneous rate of change. The height of the bullet is increasing by 190 feet per second at the 10-second mark. Correct! The input is seconds and the output is feet, so the answer is in feet per second.

Lesson Summary

In this lesson, you first learned how to calculate average rates of change, worked with the slope formula, and interpreted average rates of change in the context of several problems. You then worked on interpreting instantaneous rates of change. You will not need to know how to calculate instantaneous rates of change by hand, but you do need to know how to interpret them. Here is a list of the key concepts you learned in this lesson: The average rate of change is a measurement of how the dependent (y) variable changes with respect to the independent (x) variable. Use the slope formula to measure an average rate of change: m = (y2 − y1)/(x2 − x1). Instantaneous rates of change are used to measure how one variable changes with respect to another at a particular instant. Instantaneous rates of change can be used to measure how well economic markets are doing, how the position or speed of things changes over time (think of bullets, cars, etc.), and any other situation where two variables change and influence each other. An instantaneous rate of change is calculated as the slope at a "single point." Although two points are needed to calculate a slope, technology can be used to estimate and interpret instantaneous rates of change in this course. The units for the average and instantaneous rate of change are given by the ratio of the y-variable units to the x-variable units, such as "megahertz per year" for a model of CPU speed over time.
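One way to estimate the bullet-height interpretation numerically, rather than with an applet, is a very short secant line through points close to t = 10. This is a sketch of that idea, not the course's required method:

```python
# Sketch: estimating the instantaneous rate of change of the bullet-height
# model h(t) = -30.5t^2 + 800t at t = 10 with a very short secant line.
def height(t):
    return -30.5 * t ** 2 + 800 * t

dt = 1e-6
rate = (height(10 + dt) - height(10 - dt)) / (2 * dt)
print(round(rate))  # 190 feet per second, matching the interpretation above
```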

Given a real-world scenario modeled by a logistic function, interpret why concave up or concave down would be optimal based on context.

Now that you have seen concavity in context for logistic functions, you can start to determine if concave up or concave down would be optimal for various logistic situations. For example, if you are trying to predict how many customers you can have in a certain city, having the number of customers increase faster and faster would be optimal, and it turns out that increasing faster and faster is concave up. In comparing concave-up or concave-down segments of the curve, a frequent question is which segment would be preferred, or optimal, in the context of the situation. You will deal with that question again in this lesson.

Matching Data and Graphs

Now that you have seen data tables, graphs, and inverse functions, you can learn how to match tables, graphs, and functions to one another. Remember, tables, graphs, and functions are just different representations of the same thing. For example, a lot of research goes into trying to predict the needs of the job market ahead of time; this is a rising field of study in IT. Look at the job projections in the next table. How could you match this table to its associated graph? Notice that in the table, the number of projected jobs is steadily increasing over time. This means graph A is not a good match, since it shows the number of projected jobs decreasing from 2021 to 2022. Verifying things like this is an easy way to eliminate certain graphs or to catch an error in a graph you have made. However, since both graphs B and C are increasing, like the data in the table, this technique does not settle which of these graphs is correct. At this point, you must verify the data from the table on the graph itself. Graph B matches pretty well for a while, but once it gets to the data for the year 2024, graph B does not match well at all. The data table says that 2024 is projected to have about 4.1 thousand jobs, but graph B indicates only about 3.3 thousand. Notice that graph C matches the data very well across all the years, so graph C is the correct graph for this data. Review the graphs of functions and their inverses a bit more. Coming up, you will examine some graphs of an original function (in blue) and its associated inverse (in red). Each graph is also titled to indicate whether it shows an original, an inverse, or both. As you can see, there is a flipping between the graphs of functions and their inverses. The flipping occurs across the line y = x, which is the dashed diagonal line in each of these graphs. It might seem odd that this is the "flipping line" for inverse functions, but think about how the coordinates work for functions and their inverses. If (x, y) is a coordinate on the original function, then the coordinate (y, x) is on the graph of the inverse function. Algebraically, you are saying "swap the x- and y-values," which is why the line y = x becomes the "flipping line" for the graphs of inverse functions. As you can see from all of the graphs above, if the point (a, b) is on the original graph, then the point (b, a) is on the graph of the inverse function. Using that relationship, see if you can identify the graph of the inverse function for the following function.

Minimum and Maximum x-Values

Now that you have seen how to identify maxima and minima in general, you will transition to doing this on a graph. The key point here is that there are two components to finding maxima and minima. First, you have to realize that a maximum or minimum refers to the value of the dependent variable, or the y-variable. Second, the place where a maximum or minimum occurs refers to the value of the independent variable, or the x-variable. With that in mind, consider this example: Eve has just called Al to schedule an extremely important online video meeting. She needs all the bandwidth she can get so the screen will not freeze and her voice will not crackle. Al provides the graph below, which represents the typical bandwidth usage in the office (as a percentage) as a function of time since 6:00 a.m. Using the following graph, what would be the best and worst times to schedule Eve's online video meeting? There is a maximum bandwidth usage around 7:15 a.m. (t = 1.25) and another maximum around 5:42 p.m. (t = 11.7); either of these would be a bad time to schedule the meeting. The minimum bandwidth usage occurs at about noon (t = 5.9); this would be a good time to schedule the meeting. The next graph reveals the process Al followed. To identify the maxima in this situation, look for the largest y-values, which both occurred at about 55% of total bandwidth usage (points P and Q). Then trace those values back to the x-axis to find the corresponding time values, about t = 1.25 and t = 11.7. To identify the minimum, Al followed a similar process, except he began where the y-values were lowest. The independent variable value there is about 5.9. There are four terms associated with maximum and minimum values: global maximum, global minimum, local maximum, and local minimum. The global maximum and global minimum represent the highest and lowest values the function will ever reach; there can be only one global maximum value and one global minimum value. Local maxima and local minima refer to the highest and lowest values within a certain area, or interval, of the function; there can be many local maxima and minima. Keep in mind that a global maximum is also a local maximum by default; the same is true for global minima. The next graph is another version of Al's graph with all local and global maxima and minima labeled. The following table identifies each of these points:

Point    Maximum and Type            Minimum and Type
P        Global Max and Local Max    N/A
Q        Global Max and Local Max    N/A
R        N/A                         Local Minimum
S        N/A                         Global Min and Local Min
T        N/A                         Local Minimum
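If Al's bandwidth graph were sampled into coordinate pairs, the same trace-back process could be done in code. The sample values below are hypothetical, chosen only to echo the shape of the graph:

```python
# Hypothetical sketch: scan sampled (t, usage%) pairs to find where the
# maximum and minimum occur. These values are illustrative, not Al's data.
samples = [(0, 20), (1.25, 55), (3, 35), (5.9, 10), (8, 30), (11.7, 55), (13, 40)]

t_at_max = max(samples, key=lambda point: point[1])[0]  # x where y is largest
t_at_min = min(samples, key=lambda point: point[1])[0]  # x where y is smallest
print(t_at_max, t_at_min)  # 1.25 and 5.9 hours after 6:00 a.m.
# Note: max() returns only the first of the two tied maxima; finding every
# tied point (here, t = 11.7 as well) would require scanning for all ties.
```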

Working with and Using the Coefficient of Determination

Now that you have seen some examples of how to visually identify a good fit between functions and data, you are going to look at a number that measures the fit of a function to data: the coefficient of determination. Each regression has a coefficient of determination, which is a measure of how well a function fits a data set. The coefficient of determination is a number between 0 and 1, with values closer to 1 indicating a strong fit and values closer to 0 indicating a weak fit. A coefficient of determination of 1 means that all data points are on the function's curve, which very rarely happens in real life. In general, a coefficient of determination above 0.7 implies a strong correlation between the data set and the function. Another way of thinking about the coefficient of determination is that it gives you an idea of how big a difference you can expect between the data points and the values predicted by the model. By mathematical convention, the coefficient of determination is represented by r2, so it is sometimes referred to as the r2-value. In the following GeoGebra applet, the function's coefficient of determination is r2 = 0.88. This means that the GeoGebra applet examined every 3rd-degree polynomial function and found that this particular 3rd-degree polynomial was the one that best fit the data. Moreover, this particular 3rd-degree polynomial fits the data with a "score" of 88%; thus, the coefficient of determination is 0.88. Said another way, any other 3rd-degree polynomial function would generate a lower coefficient of determination for this data set, meaning it would not fit the data as well. There are three categories used to judge how strong a particular function is at modeling data: strong, moderate, and weak. The following table gives guidelines on how to judge the strength of a model based on the coefficient of determination, or r2-value:

r2-Value          Characterization
0.7 ≤ r2 ≤ 1      strong model / strong correlation
0.3 ≤ r2 < 0.7    moderate model / moderate correlation
0 < r2 < 0.3      weak model / weak correlation
r2 = 0            no model / no correlation

These general guidelines help you interpret how well a function models a data set. For example, if a regression function has a coefficient of determination of r2 = 0.78, the function provides "a strong relationship to the data," or there is "a strong correlation between the independent and dependent variables." What does a coefficient of determination of 1 imply? It implies that all data points are on the function's curve. Regression A has a coefficient of determination of 0.9 and Regression B has a coefficient of determination of 0.8. Assuming both regressions fit the general trend of the data, which regression should be used? Choose Regression A, because its coefficient of determination is closer to 1, indicating a better fit to the data. Which regression would you expect to have a coefficient of determination of 1? A coefficient of determination of 1 is only possible if the regression function, shown here as the red curve, goes through all the data points.

Lesson Summary

In this lesson, you learned the basics of data regression. Each regression has a curve of best fit, which has the highest coefficient of determination among all possible curves of the same polynomial degree used to model a data set. With a function to model the data, you can predict values.
Here is a list of the key concepts in this lesson: The least-squares regression (LSR) algorithm, the most commonly used regression algorithm, or process, is used to calculate all regression functions. When evaluating how well a regression function fits the data, look at how well it follows the general trends. The regression function should not behave much differently on the graph either before the data starts or after it ends. The coefficient of determination, r2, is a number between 0 and 1 that measures the strength of the fit between the regression function and the data. Characterize the fit by using r2 to determine which of these categories it matches: 0.7 ≤ r2 ≤ 1: strong model / strong correlation; 0.3 ≤ r2 < 0.7: moderate model / moderate correlation; 0 < r2 < 0.3: weak model / weak correlation; r2 = 0: no model / no correlation. For n − 1 turns in the data, use a polynomial of degree n for the regression function. In general, do not judge the fit of a model based on one point. If the model does not seem to fit one point but fits the overall trend well and the coefficient of determination is strong or moderate, then you can use the model.
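For reference, the coefficient of determination can be computed directly from its definition, r2 = 1 − SS_res/SS_tot. The observed and predicted values in this Python sketch are illustrative:

```python
# Sketch: computing the coefficient of determination from its definition.
def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)          # total variation
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # residuals
    return 1 - ss_res / ss_tot

observed = [2.0, 3.1, 4.2, 4.9, 6.1]
predicted = [2.1, 3.0, 4.0, 5.0, 6.0]
print(round(r_squared(observed, predicted), 4))  # close to 1: a strong fit
```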

Meanings of Functions

Now that you know how to calculate both linear functions and their inverses, it is time to further explore their applications. You will revisit several previous scenarios to consider whether a linear or an inverse function is more appropriate in a particular situation.

Converting to Data Points

Now you are going to change direction a little, starting with a function or coordinates for a situation and then interpreting them in a real-world context. This is an important skill because it allows you to apply the mathematics you are learning to real-world situations; when you learn to do that, you will see how math can model the world, and you can use it to convey important information. Suppose you are looking for a new cell phone carrier. A new company offers you a flat-rate fee of $50 plus $20 per line. You could use the function B(x) = 20x + 50 to model this, where x represents the number of lines and B represents the monthly bill. Note that B(x) = 20x + 50 is a linear equation. The number in front of the variable is called the line's slope, meaning $20 per line in this situation. A line's slope is always a rate of change with the keyword "per." The number 50 is called the line's y-intercept, because the line crosses the point (0, 50) on the y-axis. The y-intercept can also be understood as the starting value. In this situation, the flat-rate fee of $50 is the starting value, because you would start by paying $50 with no phone lines purchased yet, and then pay $20 per line. Suppose you needed four phone lines. Then you would calculate the total cost like this: B(4) = (20 × 4) + 50 = 80 + 50 = 130. Note the order of operations in these calculations: multiplication and division are simplified before addition and subtraction. You will see more details on the order of operations in later lessons. Now suppose you have only $100 a month to spend on your cell phone bill. In that case, you cannot afford four lines after all. The issue then is to find a plan that fits your budget. You substitute values into the function and look for a situation that works. The following table summarizes substituting some values into the function to find the optimal solution:

Function Input-Output                    Coordinate Pair    Viable Solution?
B(1) = (20 × 1) + 50 = 20 + 50 = 70      (1, 70)            Yes
B(2) = (20 × 2) + 50 = 40 + 50 = 90      (2, 90)            Yes
B(3) = (20 × 3) + 50 = 60 + 50 = 110     (3, 110)           No

It looks like 2 phone lines is the most you could afford, with $10 left over, since you are spending only $90.
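The budget search in the table can also be written as a short loop. This sketch simply evaluates B(x) = 20x + 50 for each line count and keeps the affordable ones:

```python
# Sketch of the budget search above: evaluate the bill for each number of
# lines and keep the largest count within the $100 budget.
def monthly_bill(lines):
    return 20 * lines + 50

budget = 100
affordable = [n for n in range(1, 6) if monthly_bill(n) <= budget]
best = max(affordable)
print(best, monthly_bill(best))  # 2 lines for $90
```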

Extended Rate of Change

Now you will analyze limitations on a model by examining the function's extended rate of change; that is, its behavior when x gets very large. Best Movie Rental's business has been declining since 2000 due to the development of internet streaming technology. Fewer and fewer people rent movies in a store. Suppose the number of Best Movie Rental's stores nationwide can be modeled by the function B(t) = 25.2t^2 − 625.97t + 4092.21, where t is the number of years since 2000. This function is depicted in the following graph. [The graph plots years since 2000 on the horizontal axis and number of stores on the vertical axis. An upward-opening parabola labeled B(t) = 25.2t^2 − 625.97t + 4092.21 passes through (0, 4100) and (25, 4500), with its vertex at (12.5, 200).] As the variable t gets larger and larger, you can see the function's rate of change become larger and larger. This implies that the number of the company's stores would increase at a faster and faster rate. This does not make sense, since it is likely that Best Movie Rental will be out of business at some point if business is truly declining as the data suggests. This polynomial regression model should probably only be used for values between t = 0 and t = 10. Better yet, a logistic model should be considered, since it would likely give a more meaningful representation of the long-term situation for Best Movie Rental. In the next scenario, Intel's CPU speed in MHz since 1970 can be modeled by the exponential function S(t) = 1.34e^(0.22t), where t is the number of years since 1970. This function is depicted in the following graph. [The graph plots number of years since 1970 on the horizontal axis and CPU speed in megahertz on the vertical axis. A curve labeled S(t) = 1.34e^(0.22t) rises almost horizontally at first through the origin, then at about (30, 1000) turns to rise almost vertically through (40, 9000).] As the variable t approaches positive infinity, the exponential function's rate of change becomes larger and larger and finally becomes infinitely large. This implies that Intel increases its CPU's speed by an increasingly large amount every year, which has not been true. Using historical data, the exponential regression function should only be used between t = 0 and t = 35. This is another model where the long-term rate of change seems to imply that the wrong model was used. Perhaps a logistic model would be more appropriate here.

Working with Functions

Now you will interpret a function for a given value. Remember, a function takes an input, applies a rule, and produces a unique output. Recall the software sales representative who makes a base salary of $40,000 a year, plus a 10% commission on sales. Her annual earnings are modeled by the following function: E(x) = $40,000 + 0.10x, where E represents her earnings and x is the amount of sales. This employee wants to predict her earnings if she sells $100,000 in product. E(100,000) = $40,000 + 0.10($100,000) = $40,000 + $10,000 = $50,000. Therefore, if the rep sells $100,000 in product, her annual salary will be $50,000. Recall the tech support specialist who takes 15 calls each hour. The number of calls in a workday can be modeled by the function C(h) = 15h, where h is the number of hours worked. Suppose this tech support specialist plans to work nine hours on Monday. How many calls can he expect? Nine hours becomes the input, so plug 9 into the function for h: C(9) = 15(9) = 135. The tech support specialist can expect to handle 135 calls on Monday.

Constant Ratio

Occurs when the numeric relationship between two quantities - that is, how many times one number contains the second - remains unchanged over time.

Introduction to Logistic Functions

Of the functions in this course, logistic functions are the most complex, algebraically speaking. That means you will need to take care when calculating input-output pairs with logistic functions. You will practice calculating those in this next example. Before getting into logistic functions, think about power outages. If you have ever been in a power outage, you know it can take a long time for everyone to have their electricity restored. If you think of the percentage of people with restored power, this variable is bounded below by 0% (nobody with power restored) and bounded above by 100% (everyone with power restored). It is the fact that there are two limiting values for this variable that makes it a perfect example to model with a logistic function. With that in mind, consider this example: A hurricane recently hit Rock City. Rock City Power Company (RCPC) is working hard to restore power as quickly as possible, of course. During the first few hours, RCPC restored power to hospitals, traffic signals, and other critical infrastructure. In the next few hours, power was restored to a few neighborhoods, and the rate of restoration increased rapidly until most of the affected population was out of the dark by about 15 hours after the storm had ended. The following is the graph of a function modeling the percentage of power restored after the hurricane hit Rock City. From t = 0 to about t = 7, it looks like the function grows exponentially. However, since the percentage of power restored is limited to 100% (that is, Rock City's power cannot be restored to more than 100%), the function cannot keep increasing, and its rate of change starts to slow down after t = 7. By t = 20, most power has been restored, and the function barely increases at all after that. In this example, the function's formula is P(t) = 100/(1 + 50e^(−0.5t)), where P(t) models the percentage of power restored, and t is the number of hours since the hurricane stopped. Notice that the logistic function has an exponential function in the denominator of a fraction. This means that you can use some of your skills with exponential functions to make working with logistic functions a little easier. To find out what percentage of power was on when the hurricane stopped, you can substitute t = 0 into P(t) and get: P(0) = 100/(1 + 50e^(−0.5×0)) = 100/(1 + 50e^0) = 100/(1 + 50 × 1) = 100/51 ≈ 1.96. The result implies that when the hurricane stopped, only 1.96% of the city's power was still on. This must have been a major hurricane. What if you wanted to know how much of the city had power about 24 hours after the hurricane? To answer that, you can substitute t = 24 into P(t) and get: P(24) = 100/(1 + 50e^(−0.5×24)) = 100/(1 + 50e^(−12)) = 100/(1 + 50 × 0.00000614) = 100/1.00030721 ≈ 99.97. Twenty-four hours after the hurricane stopped, about 99.97% of the city's power had been restored. Keep in mind that this is just a model and the numbers are not meant to be perfect; it can be inferred that 100% of the city has power restored somewhere around 24 hours after the storm. The general formula for a logistic function is f(x) = L/(1 + Ce^(−kx)) + m. The value of L + m is the maximum value the function approaches, while m is the minimum value the function approaches. The minimum value is sometimes referred to as the lower limit and the maximum value as the upper limit. In the Rock City electricity example, the function P(t) approaches 100%, implying full restoration. This means 100% is the upper limit here. Also notice that P(t) has nothing added after the fraction, meaning that the minimum, or lower limit, is 0 here. That makes sense, since 0% is the smallest percentage of the city that could have power restored. The values of C and k affect how the logistic curve changes in different situations. In general, you do not need to worry about interpreting these values. The constant e, whose value is 2.71828..., frequently appears in math, especially with exponential and logistic functions.
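Because logistic formulas are easy to mistype, it can help to check input-output pairs with a short program. This Python sketch evaluates the Rock City model P(t) at the same t-values used above:

```python
import math

# Sketch: evaluating the power-restoration model P(t) = 100 / (1 + 50e^(-0.5t)).
def P(t):
    return 100 / (1 + 50 * math.exp(-0.5 * t))

for t in [0, 7, 24]:
    print(t, round(P(t), 2))  # about 1.96% at t = 0 and 99.97% at t = 24
```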

Horizontal Asymptote

On a graph, a horizontal line parallel to the x-axis that a function's curve approaches but never touches, even as the curve extends to infinity.

Origin

On a graph, the starting point where both x and y are equal to zero, and the x-axis and the y-axis intersect.

Limitations on Models

One limitation on regression functions is that you cannot assume the function will work for all x-values. Any function is only useful within certain limitations of its variables. These limitations are also called constraints. Here are a few examples. Best Movie Rental's business has been declining since 2000 due to the development of internet streaming technology. Fewer and fewer people rent movies in a store. The number of Best Movie Rental's stores nationwide can be modeled by the function B(t)=156(1+0.02e0.44t), where t is the number of years since 2000. The following is a graph and the data set from 2000 to 2009 that was used to generate the regression function: [A graph shows a curve that falls through (negative 2, 160), (2, 150), (10, 60), and (22, 0). A series of data points closely follow the curve. ]© 2018 WGU, Powered by GeoGebra By this model, in 2000, the number of Best Movie Rental stores nationwide was B(0)=1561+0.02e0.44×0=1561+0.02e0=1561.02≈153. In 2010, the number predicted by the model decreased to B(10)=1561+0.02e0.44×10=1561+0.02e4.4=1561+1.629...=1562.629...≈59. Those two numbers, 153 and 59, are very close to the real numbers of stores in 2000 and 2010. However, can this model accurately predict the past and the future? In 2020, the predicted number of Best Movie Rental stores nationwide would be B(20)=1561+0.02e0.44×20=1561+0.02e8.8=1561+132.684...=156133.684...≈1. Having only one store left still seems likely since the movie rental industry is dying out. Looking into the past when Best Movie Rental's business was booming, about 1990 (t = -10), the model produces this result: B(−10)=1561+0.02e0.44×−10=1561+0.02e−4.4=1561+0.0002...=1561.0002...≈156. It is unlikely that Best Movie Rental had only 156 stores in 1990, three more when business was booming than in 2000. This example shows that no model is perfect. Here, the function B(t) can only reliably be used for predictions within limitations. For those data values from t = 0 to t = 9, the fit is very strong to the data. Notice the high r2-value of 0.95. Looking to the future, it seems the model predicts well the values for those years, too. But looking to the past, the model falls short. It would be important to collect more data from the past on Best Movie Rental stores to predict more accurate values in the past. Here is another example. Intel's central processing unit (CPU) speed in megahertz (MHz) since 1970 can be modeled by the exponential function S(t)=1.34e0.22t, where t is the number of years since 1970. For a long time, exponential models were used to model things like speed in the IT world. In fact, this model worked very well until about 2005. However, this model predicts Intel CPU's speed in 2017 as S(47)=1.34e0.22(47)=41,468MHz. Over 41,000 MHz is far above the speed of the fastest CPU ever produced by Intel in 2017, which is 4,400 MHz. It could be argued that an exponential model is not appropriate to use here since there is a natural limitation on CPU speed. However, sometimes natural limitations are not visible ahead of time and they must be accounted for after they are discovered. Keep in mind that regressions are just a tool to get an objective "best guess" on where trends might lead in the future. That is why a coefficient of determinationof 1 should make you suspicious; no model ever realistically fits the data that well. Look at one more example. In 2000, 20 rabbits were accidentally released on an island and they quickly began reproducing. 
The island is small with limited resources, so the rabbit population did not increase exponentially. The number of rabbits on the island can be modeled by the logistic function R(t) = 4,205/(1 + 203.24e^(−0.52t)), where t is the number of years since 2000. From this model, the number of rabbits on the island in 2010 should have been about R(10) = 4,205/(1 + 203.24e^(−0.52×10)) ≈ 1,982. Notice that 1,982 is close to the number of rabbits estimated by local environmentalists. If this trend had continued, the number of rabbits in 2015 would have been R(15) = 4,205/(1 + 203.24e^(−0.52×15)) ≈ 3,882. However, in 2012, the local government took steps to address the rapidly increasing rabbit population. Some foxes were released into the wild, and the foxes successfully controlled the rabbit population. In 2015, the number of rabbits on the island had decreased to approximately 1,400. The government's action rendered the original model's prediction for 2015 completely meaningless. Therefore, this particular regression model can only be used reliably for the period from 2000 to 2012. This example illustrates that models can accurately predict the future only if everything stays as it is. The moment there are changes in the situation or the variables, there must be changes in the model, and any outdated models must be thrown out.
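To make the arithmetic concrete, here is a minimal Python sketch of the rabbit model above; it simply evaluates R(t) = 4,205/(1 + 203.24e^(−0.52t)) at a few inputs.

    import math

    def R(t):
        # Rabbit population t years after 2000, per the regression model.
        return 4205 / (1 + 203.24 * math.exp(-0.52 * t))

    for t in (0, 10, 15):
        print(t, round(R(t)))
    # Prints roughly 21, 1982, and 3882. The t = 15 value is the model's
    # prediction, which became meaningless once foxes arrived in 2012.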

Number of Data Points

One limitation that should always be considered before using a model is the amount of data. In the following example, you will see how some data sets do not have enough data to support calculating a regression equation. Consider this example: The following scatterplot displays data on the number of daily online gamers for Instinct Fighters since January 1. Maria is using a linear regression to model the data. Maria collected only two days of data before she ran this linear regression. Notice that the coefficient of determination is a perfect 1. Does this imply that Maria has found the perfect function to model the number of online gamers? Probably not. Maria also tried to model the data with the exponential, polynomial, and logistic functions displayed in the following three graphs. In all four types of regressions, the coefficient of determination was always a perfect 1, implying that all data points are an excellent fit to all the functions. This is because there are only two data points, which fit any type of function. This perfect coefficient of determination is meaningless due to the small number of data points. There are no set standards on how much data is enough; modeling data is simply a matter of dealing with estimates and "best guesses." Regressions do provide important information, but having enough data to back up those "best guesses" is a bit of an art. Essentially, the less clear the pattern is for a regression, the more data is needed. You need at least enough data to see what a general pattern might be. For the purposes of this course, about ten data points are needed to produce a meaningful regression.
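A minimal Python sketch, with made-up numbers standing in for Maria's two days of data, shows why any line through two points earns a coefficient of determination of exactly 1:

    import numpy as np

    # Two data points (hypothetical values for the two days of data).
    x = np.array([1.0, 2.0])
    y = np.array([120.0, 150.0])

    slope, intercept = np.polyfit(x, y, 1)  # linear regression
    pred = slope * x + intercept

    # Coefficient of determination, r^2 = 1 - SS_res / SS_tot.
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    print(1 - ss_res / ss_tot)  # 1.0 -- "perfect," yet meaningless with two points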

Comparing Two Scenarios

One of the more insightful ways of comparing rates of change is comparing two different scenarios. For example, Clay is a healthcare information manager and is researching which healthcare app his hospital is going to adopt. So far, Clay has narrowed down the options to two healthcare apps and has data on the number of users in healthcare markets similar to his. In the following applet, the black curve represents the number of users for the first healthcare app while the grey curve represents the number of users for the second healthcare app; the independent variable for both is the number of days since the apps launched. The two lines represent the instantaneous rate of change for each app. The makers of the first app (the black curve) promise Clay that he will have more users in the first 50 days of using their app than he would with their competitor (the grey curve). According to these projections, this is true since the black curve goes through the coordinate (50, 32) while the grey curve goes through the coordinate (50, 13). The instantaneous rates of change are also similar for both curves at this point. The black curve has an instantaneous rate of change of 1.09 at x = 50, while the grey curve has an instantaneous rate of change of 1 at x = 50. This means both applications are adding about 1 user per day at 50 days since launch. All of this data might appear to indicate that the first app (the black curve) is better, but if you keep looking at the instantaneous rate of change over the next few days past day 50, the second app starts to add more and more users. This means that the total number of users for the second app starts to climb faster and outpace the total users for the first app. Clay is surprised to see this trend in the data until he finds out that the second app has ongoing tech support while the first app only offers tech support for the first 30 days. In summary, the instantaneous rate of change gives Clay a tool to see how the two apps add new users and that knowledge can help Clay make a better long-term choice for which app will be used more by patients at his hospital. The administrators at Clay's hospital are intrigued by the idea that the second app will do so much better than the first app based on the data Clay shared with them. They ask how many new users would be added on the 100th day for each app. Find and interpret this rate of change using the following graph. At day 100, the first app will add about 2 users per day while the second app will add about 3 users per day. At day 100 (x = 100 in the graph), the first app has an instantaneous rate of change of 2.09 while the second app has an instantaneous rate of change of 3. This means on the 100th day, the first app will be adding about 2 users per day compared to 3 users per day for the second app.

Inverse

Opposite in position, direction, order, or effect.

What is an outlier? A data set was collected on the number of traffic rule violations at an intersection on weekdays from 4:00 p.m. to 6:00 p.m. What might be a good reason to remove an outlier from this data set?

Outliers are data points that are distinctly separate from the others in a data set for reasons beyond the data. Said another way, an outlier is any point that lies outside the general trend of the data due to external influences. On a holiday, you would expect less traffic and, thus, fewer traffic rule violations. You would consider the holiday an external influence on the data, which is why such a point would truly be an outlier.

How would outliers affect a regression's coefficient of determination? The following scatterplot has a regression line. Which statement about points A and B is correct?

Outliers would cause the coefficient of determination to be smaller (that is, further away from 1). Points A and B cause the regression line's coefficient of determination to be further away from 1, because the coefficient of determination measures how closely the data points follow the regression function.

A Decreasing Decrease

Over the last couple of lessons, you have explored a quandary faced by Campbell Computers involving its new processor. The computing time in seconds for x processors is modeled by the function c(x) = 7 × 0.93^x. The following applet graphs the function and its instantaneous rate of change. What do you notice about the instantaneous rates of change as x gets bigger? You probably noticed that as x gets bigger, the instantaneous rate of change gets closer and closer to 0. The function decreases fastest at the beginning, when x is closer to zero. This fact is due to the proportional nature of multiplying. Each time x increases by 1, the function's value is multiplied by 0.93, so each decrease is 7% of an ever-smaller number. Any time that the base of an exponent is between 0 and 1, the function is decreasing. This makes the change in differences smaller, as seen in the table below. Notice that the difference between rows 1 and 2 is -0.46 and that the difference between rows 4 and 5 is -0.37.

x	f(x)	Difference from previous f(x)
0	7.00	0.00
1	6.51	-0.49
2	6.05	-0.46
3	5.63	-0.42
4	5.23	-0.40
5	4.86	-0.37

The differences are negative because the amounts are decreasing. The differences are also getting closer and closer to 0. The closer the x-value is to 0, the more significant the difference, and this is true for all decreasing exponential functions. The closer the growth factor is to 0, the faster the function will decrease when x is close to 0. This is because multiplying by a number closer to 0, like 0.3, causes a greater decrease than multiplying by a number further from 0, like 0.9. As x gets bigger, every decreasing exponential function has an instantaneous rate of change that gets closer and closer to 0. What does this mean in terms of comparing two situations with decreasing exponential functions? At times, the distinction between two options when looking for the "optimal" solution can be unclear. For example, say that one of Campbell Computers' competitors, Raja Computers, can do the same job a little bit better than Campbell can. Raja Computers' computing time in seconds for x processors is modeled by the function r(x) = 8 × 0.90^x. Since the growth factor for Raja Computers' model, 0.90, is smaller than that for Campbell Computers, 0.93, Raja Computers will outperform Campbell Computers in the long term. The following graph depicts the two functions and their instantaneous rates of change. From a numerical standpoint, Raja Computers is the clear winner as the number of processors gets larger. However, is it cost-effective to add more and more processors to achieve a shorter computation time? Maybe, but deciding that would require more data. It is important to be able to identify optimal situations or solutions for a problem, but it is also important to realize the limitations of these sorts of analyses. Just because something may be optimal from one perspective does not mean it will be optimal from a different perspective. That said, in this course you will focus on finding the optimal solutions based on the mathematics. Just do not forget to consider other factors important in your field when you put these concepts to work in your discipline.

A Note about Concavity

If you worked through the polynomial unit, you spent a fair bit of time thinking about concavity. You answered a lot of questions about when a function is concave up or concave down.
With exponential functions, concavity is not an issue, as exponential functions are either always concave up or always concave down.

Lesson Summary

In this lesson, you learned about the significance of increasing and decreasing exponential functions, noting how the differences change as the x-values increase in examples involving Better Hires and Campbell Computers. Here is a list of the key concepts in this lesson:
- Exponential functions grow proportionally.
- The key to the rate of change for an exponential is the growth factor. In general, in an exponential growth function f(x) = a × b^x, the growth factor is b, the base of the exponent.
- If a function grows exponentially, then the instantaneous rate of change will grow as x gets bigger. The greater the growth factor, the greater the instantaneous rate of change as x gets bigger, regardless of the initial value.
- If a function decreases exponentially, then the instantaneous rate of change will get closer to 0 as x gets bigger. The smaller the growth factor, the more significant the function's instantaneous rate of change will be when x is close to 0, regardless of the initial value.

Previously, you saw that Campbell Computers modeled the time needed for a computation by t(x) = 7 × 0.93^x. In the long term, would Raja Computers more effectively beat Campbell's time by decreasing the initial value, 7, or by lowering the decay rate, 0.93? Explain your answer. In the long term, lowering the growth factor from 0.93 will cause the processing time to decrease faster than lowering the initial value of 7. Changing the growth factor causes a more significant change. Tom says that neither change to the model in the previous question would substantially affect the instantaneous rate of change for the five-hundredth processor installed. What does Tom mean by this statement? Tom is saying that as the number of processors x grows, the rate of change for any decreasing exponential function gets closer and closer to 0. So, by the time you get out to adding processor number 500, the changes to the instantaneous rate of change from adjusting the initial value or the decay rate are so small, they are negligible.
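As a quick check of the difference table above, here is a minimal Python sketch that regenerates it from c(x) = 7 × 0.93^x (small rounding differences from the table are expected):

    def c(x):
        # Campbell Computers' computing time in seconds for x processors.
        return 7 * 0.93 ** x

    prev = c(0)
    print(0, round(prev, 2))
    for x in range(1, 6):
        cur = c(x)
        # The difference column shrinks toward 0 as x grows.
        print(x, round(cur, 2), round(cur - prev, 2))
        prev = cur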

In the following graph, how would point A affect the regression function? In the following graph, how does point B affect the regression function?

Point A makes the arch narrower. Without point A, the arch would be wider. Point B makes the arch wider. Without point B, the arch would be narrower.

Which statement about possible outliers and interpolation and extrapolation values is true?

Possible outliers affect both interpolation and extrapolation values, but interpolation values are less affected. While both interpolation and extrapolation values are affected by possible outliers, extrapolation values are more affected.

Instantaneous Drop in Value

Previously, you examined the changing value of Mappit's stock. On one particular day, the value was decreasing by 2% per hour. Its initial value that day was $32.50. The stock's value, y, can be modeled by the equation y = 32.50 × 0.98^x after x hours. You calculated the average rate of change for the first four hours of trading like this: Step 1: Substitute 0 and 4 for x in the equation: f(0) = 32.50 × 0.98^0 = 32.50 and f(4) = 32.50 × 0.98^4 = 29.98. Step 2: Set up and calculate using the formula: (29.98 − 32.50)/(4 − 0) = −0.63. But what if it were more important to figure out what was happening at an instant instead of over a period of time? In this example, when x = 1, the instantaneous rate of change is −$0.64. You can see this in the following graph. How do you interpret the meaning of this graph and its associated instantaneous rate of change? To answer that question, the first step is to determine the unit for this problem. The value of the stock is in dollars and the time is in hours, which means that the unit for the instantaneous rate of change is dollars per hour. Second, the negative sign in front of the 0.64 needs to be considered. This means that Mappit's stock was losing value. Therefore, the instantaneous rate of change can be interpreted as "one hour into trading, the stock was losing $0.64 of value per hour." Look at another example. When x = 8, the instantaneous rate of change is −$0.56. What does this mean? The instantaneous rate of change here means "eight hours into trading, the stock was losing $0.56 of value per hour." The stock was still losing value, which is bad, but it was losing value more slowly than it was earlier in the trading day, since losing $0.56 per hour is a lesser loss than losing $0.64 per hour. Take a look at how this appears in the next graph. Move the point on the graph to see the instantaneous rate of change for different points. As x gets larger, what is happening to the instantaneous rate of change of the function y = 32.50 × 0.98^x? The instantaneous rate of change is increasing, or getting closer to zero. As x gets larger, the slope of the line through the point gets closer to zero. A tablet dissolves in water. The exponential function g(x) models the weight of the tablet in grams x seconds after it is dropped in water. The instantaneous rate of change when x = 10 is -0.33. What does this rate of change mean? It means that at second 10, the tablet is losing 0.33 grams per second. The rate of change is known for this one instant. The following graph models the world's population over time, x. According to the model in the graph, the instantaneous rate of change is increasing as x increases. What does this mean in the context of the world's population? It means that the world's population is growing faster and faster each year. Each year, the instantaneous rate of change will be greater than the previous year's, meaning that the population is growing ever faster.

Lesson Summary

In this lesson, you examined the information that comes from calculating an instantaneous rate of change, using Mappit's website visits and its stock price drop as subjects. Here is a list of the key concepts in this lesson:
- Instantaneous rate of change measures how two variables are changing with respect to one another in a particular moment, as opposed to over a period of time.
- Instantaneous rate of change requires a focus on units, similar to average rate of change.
- Interpreting instantaneous rates of change for exponential functions is the same as for polynomial functions.
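As a hedged illustration, calculus (not covered in this course) gives the instantaneous rate of change of a × b^x as a × ln(b) × b^x; this short Python sketch uses that formula to reproduce the stock numbers above.

    import math

    def inst_rate(a, b, x):
        # Instantaneous rate of change of a * b**x at the input x.
        return a * math.log(b) * b ** x

    print(round(inst_rate(32.50, 0.98, 1), 2))  # -0.64 dollars per hour
    print(round(inst_rate(32.50, 0.98, 8), 2))  # -0.56 dollars per hour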

Identifying a Popular Game to Invest In

Previously, you compared instantaneous rates of change for two values on the same graph. Now you will learn to make predictions based on instantaneous rates of change at the end of different functions. Whether the situation happens in the world of information technology, business, or just everyday life, being able to consider different scenarios can help people make decisions and plan for the future. Consider this example concerning Zonkers, a game being developed by Clever Apps. Here are two scenarios of what might happen in terms of the number of downloads by users. In one test market, Clever Apps did not spend any money advertising the game and relied solely on social media to promote it. The following is a graph showing the number of downloads each week for Zonkers. In the first graph, you can see that the instantaneous rate of change toward the end of the function became slower and slower until it was close to zero. This means that Zonkers has not been getting many downloads for quite some time. This is certainly not an optimal situation for Clever Apps. In another test market, Clever Apps took a different approach. They used social media to promote the game while also spending some money to advertise the game on YouTube. In the next graph, you can see that as time passes, the instantaneous rate of change slowed down but the trend remained positive. Zonkers downloads are continuing to increase, although the instantaneous rate of change seems to be getting smaller. Recall that when a line measures the instantaneous rate of change, it measures the rate of change of the graph at a single point. The line is said to be tangent to the curve. When a line measures the average rate of change and passes through two points on the graph, the line is referred to as a secant line. Overall, this second situation would be much more welcome to Clever Apps and is in many ways an optimal outcome. The first situation had a greater initial surge of downloads but then fewer and fewer until no more were occurring. The slower but steadier growth in the second situation was far preferable to Clever Apps.

Turning Graphs into Notation

Rachel is opening her own taxi company and is presenting a proposal to the bank for a loan. Part of what she needs to explain is the initial cost for a taxi ride and the additional cost per mile. Examine the following graph: How should Rachel make a table based on the data on this graph? First, she would find an x-coordinate and its corresponding y-coordinate. Next, she would take these points and input them into her table as follows:

Miles Traveled	Total Cost (in dollars)
0	5
1	5.75
2	6.50
3	7.25

Notice that either way, whether you look at the graph or at the table, you can easily tell that a two-mile trip would cost $6.50. Both visual representations display the same information. How much would a three-mile trip cost? Yes, it is $7.25. Functions are an important concept used throughout math, and in this section you will learn how to use function notation to represent this data. A function has an input value and an output value. In our example, the cost of a trip depends on the number of miles driven, so the number of miles is the input value and the cost is the output value. Function notation looks like f(input) = output, or f(x) = y, where f is the name of the function. Without context, math uses default function names: f, g, h, p, q, s, t, and so on. With context, it is better to choose meaningful names for the function and its variables. In Rachel's example, the dependent variable is cost and the independent variable is the number of miles, so you should choose C(m) to represent this function, where C is the cost and m is the number of miles. Note that C(m) does not mean variable C multiplied by variable m. You have to understand variables in context. In function notation, C(1) = 5.75 implies that the cost of riding the taxi for 1 mile is $5.75. Note that C(1) is the corresponding y-value (output value) when the x-value (input value) is 1. Without function notation, you would have to write y = 5.75 when x = 1, which is not as efficient as C(1) = 5.75. Similarly, you can translate the data in the table into C(0) = 5, C(1) = 5.75, C(2) = 6.5, and C(3) = 7.25.
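As a small sketch, the table above is consistent with a $5 base fare plus $0.75 per mile; the equation C(m) = 0.75m + 5 used here is inferred from the table, not stated in the text.

    def C(m):
        # Total cost in dollars for a trip of m miles (inferred from the table).
        return 0.75 * m + 5

    for m in range(4):
        print(m, C(m))  # reproduces 5.0, 5.75, 6.5, 7.25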

When Polynomial Models Do Not Work

Rate of change is a way of asking for the slope in a real-world problem. Instantaneous and average rates of change tell you how variables are changing over time. It can be good to get an idea of how things change over time. For example, what if you could predict the rate of return on your retirement account? Looking at your rate of return 20 years down the road can help you see if you will have enough money when you retire. From an IT perspective, these sorts of forecasts can help when you want to predict how much tech support a certain app or specific network might need as time goes on. Polynomials are not the best for this because they either grow or decline rapidly as your x-values get larger. Polynomials are best for modeling data that has several ups and downs (or "turns"), but once you get past all the "turns," polynomials sometimes lose their power in modeling phenomena. For example, Jennice is a freelance programmer whose income varies each month. Her earnings for the first year are found in the following table:

x = month	F(x) = income
1	2,900
2	3,800
3	3,800
4	5,200
5	4,700
6	4,500
7	4,030
8	5,000
9	4,154
10	4,867
11	3,657
12	4,000

Though there are only 12 months of earnings in the table, the polynomial F(x) = 4.9x^3 − 137x^2 + 1101x + 1994 was created to approximate the programmer's earnings (you will learn in Unit 8 how these functions are "created"). If you plug in some of the months included here (such as x = 8), you will see that the polynomial does a pretty good job of estimating Jennice's income for that month: F(8) = 4.9(8)^3 − 137(8)^2 + 1101(8) + 1994 = 4.9(512) − 137(64) + 1101(8) + 1994 = 2508.8 − 8768 + 8808 + 1994 = 4542.8. The following graph represents the same polynomial including the monthly income data above. You can see in the graph that the data coordinate (8, 5000) is very close to the coordinate on the function (8, 4542.8), showing that this polynomial does do a pretty good job of estimating Jennice's income. But what about Jennice's long-term income? According to the model, it looks like the rate of change for Jennice's income goes up substantially starting in month 14. Unfortunately, Jennice can probably not count on her income increasing substantially in month 14. This phenomenon is common with polynomials. Recall, polynomials are good at modeling data that has many "turns," but polynomials always grow large (either positive or negative) after a while, which means they are not always the best functions for determining longer trends. That said, when presented with two polynomial models, you should still be able to determine which one will increase or decrease at a faster rate in the long term.
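A minimal Python sketch makes the breakdown easy to see; the month-24 input is an arbitrary choice to illustrate pushing the model past its data.

    def F(x):
        # Cubic regression model of Jennice's monthly income.
        return 4.9 * x**3 - 137 * x**2 + 1101 * x + 1994

    print(F(8))   # 4542.8 -- close to the actual month-8 income of 5,000
    print(F(24))  # about 17,244 -- far beyond anything in Jennice's data,
                  # because the cubic term dominates once x is large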

Short-Term Versus Long-Term

Rate of change shows how much something has changed over time. When you have a linear function, the slope, or rate of change, is constant; it never changes. Therefore, the short-term rate of change and the long-term rate of change would be exactly the same. But if the function is a curve, like a 2nd-degree polynomial or higher (such as in the following graph), both the average and instantaneous rates of change will vary. Thus, the short-term and long-term rates of change will likely be different. Why is this knowledge important in a field like business? Comparing short-term versus long-term rates of change provides a glimpse into what may happen in the near future, as well as the more distant future success of a product, a marketing campaign, or a business as a whole. For IT, comparing short-term versus long-term rates of change provides insight into how team resources are being used or how hardware resources are used by computer programs. Consider this scenario: a business is trying to expand to a new location in another city. Currently, it is considering expanding in Chicago or Indianapolis. Based on some market research, the business has the following two projections for profit per month for the first year in the two locations. Which one seems like it would be a better city to expand to: Chicago (represented by the red curve marked C) or Indianapolis (represented by the blue curve marked I)? Initially it looks like Indianapolis is the better city to expand to, since the profit per month is consistently higher in Indianapolis. But after month 7, it looks like a Chicago location might outperform an Indianapolis location. To quantify just how much better the Chicago location is, use an average rate of change to see how the two locations compare from month 7 to month 12. An estimate of the coordinates for the Indianapolis function would be (7, 3.4) and (12, 5.3), which would result in an average rate of change of: m = (5.3 − 3.4)/(12 − 7) = 1.9/5 = 0.38. This indicates that from month 7 to month 12, the Indianapolis store is increasing profits by about $380 each month. This means that from month 7 to month 12, the Indianapolis store is projected to have a 5 × $380 = $1,900 increase in profit. How would the projections for a Chicago store compare? An estimate of the coordinates for the Chicago function would be (7, 3.4) and (12, 7.4), which would result in an average rate of change of: m = (7.4 − 3.4)/(12 − 7) = 4/5 = 0.8. This means the Chicago store is projected to have an $800 increase in monthly profit from month 7 to month 12. Over the course of those five months, that is a total increase of 5 × $800 = $4,000. That is more than double what is expected for an Indianapolis location. Based on the rate of change alone, a Chicago location will greatly outperform an Indianapolis location.
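Here is a minimal Python sketch of the calculation above, with profits read off the graph in thousands of dollars:

    def avg_rate(x1, y1, x2, y2):
        # Average rate of change between two points: rise over run.
        return (y2 - y1) / (x2 - x1)

    print(round(avg_rate(7, 3.4, 12, 5.3), 2))  # 0.38 -> about $380 more per month (Indianapolis)
    print(round(avg_rate(7, 3.4, 12, 7.4), 2))  # 0.8  -> about $800 more per month (Chicago)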

Real-World Meaning of Slope

Rates of change are everywhere in the real world. Think about climbing a set of stairs or rolling a wheelchair down a ramp. Both the stairs and the ramp have a "slope" or rate of change. You can describe the slope, or steepness, of the stairs and ramp by considering the horizontal and vertical changes as you move along them. But rate of change is not limited to the physical world. What about purchasing produce? Cost per pound is another rate of change you probably know well. Understanding general rates of change for linear functions is the focus of this lesson.

Comparing Average Rates of Change

Rates of change show how much something has changed over time. Graphically, the slope of the line tells you how steep a line is, which also tells you how fast or slow one quantity is increasing or decreasing in relation to another quantity. Understanding how to interpret slope will allow you to compare rates of change over different intervals of a polynomial function. A football team plays in a stadium that holds 110,000 fans. When ticket prices drop, attendance tends to rise. When ticket prices rise, attendance tends to drop. The following chart shows ticket prices and the corresponding number of attendees.

Ticket Price	Approximate Tickets Sold (in thousands)
$30	103.4
$40	98.9
$50	93.7

The demand function (demand for tickets) is approximated by the following polynomial graph: As the business manager for the football team, Keith is interested in exactly how a proposed change in ticket prices could affect sales. Keith uses the data in the table above to analyze and compare demand at different price points. For example, Keith wants to know how much attendance is affected when ticket prices go from $30 to $40. He also wants to know how attendance is affected when ticket prices go from $40 to $50. These are two average rates of change, which need to be compared for the two intervals above: first, [30, 40] and then [40, 50]. Next, you will see an applet that shows these two average rates of change. Which one more negatively impacted ticket sales, the increase from $30 to $40 (shown in red) or the increase from $40 to $50 (shown in blue)? Visually, you can see that the blue line (corresponding to the average rate of change over the interval [40, 50]) is steeper than the dark red line (corresponding to the average rate of change over the interval [30, 40]). The blue line is steeper, which indicates that more ticket purchases were lost going from $40 to $50 than going from $30 to $40. The average rates of change show exactly how many ticket sales were lost per dollar increase with each price change: going from $30 to $40, Keith lost about 450 ticket sales for each dollar increase, whereas he lost 520 ticket sales per dollar increase going from $40 to $50. Keep in mind that the average rates of change measure the change in ticket sales, in thousands, per dollar. For the first average rate of change (−0.45), this means Keith should expect to lose about 450 ticket sales for each dollar increase. This gives Keith a bit more information; for instance, if he adjusted the price from $30 to $35, he should expect to lose about 5 × 450 = 2,250 ticket sales from the $5 increase. On the other hand, with the second average rate of change (−0.52), Keith should expect to lose about 520 ticket sales for each dollar increase. That means going from $40 to $45 means Keith should expect to lose about 5 × 520 = 2,600 ticket sales from the $5 increase. Overall, this information tells Keith that more fans felt a pinch when the price went from $40 to $50 compared to when the price went from $30 to $40. This knowledge may discourage Keith from raising prices beyond $50. How might you expand this to general practices in business? When ticket prices are low, consumers are more likely to purchase tickets. When ticket prices are high, consumers may opt to watch the game on TV. A demand equation uses polynomials to figure out how high or low a product can be priced without affecting demand too adversely. In 2000, Ben had $1,996 in savings. In 2001, he had $2,729 in savings, and in 2003, he had $3,501 in savings.
Did Ben's average rate of change in savings increase more from 2000-2001 or from 2001-2003? Since the average rate of change from 2000 to 2001 is $733 per year and the average rate of change from 2001 to 2003 is $386 per year, there is a greater rate of change between 2000 and 2001.

Estimating Input-Output Pairs Continued

Rather than calculating input-output pairs, it is sometimes easier to estimate them from a graph. This allows you to bypass the calculations, though estimations from a graph are just that: estimations. Look again at these examples: S(t) = 40/(1 + 150e^(−0.7t)) and R(t) = 50/(1 + 150e^(−0.9t)), where S(t) and R(t) model the market share of Sunrise Sky Spa and Retreat Spa, in percentage, and t is the number of years since 2000. These functions are depicted in the following graph. What is the market share of each company at the beginning of 2012? To determine that, look for points on both functions with x = 12. Since R(12) ≈ 49.5, you can interpret this to mean that Retreat Spa had about 49.5% of the city's market share at the beginning of 2012. Similarly, since S(t) passes approximately through the point (12, 38.5), this means Sunrise Sky Spa had approximately 38.5% of the market share at the beginning of 2012. In short, estimating coordinates from a graph does not change how you interpret the input-output pairs of a function. Keep in mind that because you are working with estimates when you work with graphs like this, you will have less accuracy than if you calculated the input-output pairs by hand. In this lesson, you learned how to estimate an output value using a graph, given a logistic function's input value. You also interpreted those values in several different scenarios. Here is a list of the key concepts in this lesson:
- In a logistic function f(x) = L/(1 + Ce^(−kx)), the value of k determines how fast the function grows in the mid segment. The higher the value of k, the faster the function grows.
- In a logistic function f(x) = L/(1 + Ce^(−kx)), the value of L determines the function's maximum value.
- In a logistic function f(x) = L/(1 + Ce^(−kx)), the y-values approach the asymptote y = L as the x-value becomes infinitely large, and the y-values approach the asymptote y = 0 as the x-value becomes infinitely small.
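To check the graph estimates by direct calculation, here is a minimal Python sketch of the two logistic models:

    import math

    def S(t):
        # Sunrise Sky Spa market share (%), t years since 2000.
        return 40 / (1 + 150 * math.exp(-0.7 * t))

    def R(t):
        # Retreat Spa market share (%), t years since 2000.
        return 50 / (1 + 150 * math.exp(-0.9 * t))

    print(round(S(12), 1), round(R(12), 1))  # about 38.7 and 49.8 -- the graph
    # estimates of 38.5 and 49.5 are close, as expected of estimates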

Given a real-world scenario, the graph of a polynomial function modeling the scenario, and two x-values, interpret why one x-value's rate of change is optimal based on real-world context.

Ready, set, go! You and your best friend, Halley, are running a 100-meter race. After 10 seconds, you and Halley have both run 35 meters. After 20 seconds, you have covered 60 meters while Halley has covered 65 meters. But at 30 seconds, you are tied up again and have both covered 100 meters. Which one of you had a faster average speed during the first ten seconds? What about the last ten seconds? In this lesson, you will compare average and instantaneous rates of change at specified inputs (x-values).

Using the Slope Formula to Find the Slope

Recall the example from the previous lesson on rental cost at EZ Driving Rental. Cost can also be modeled by the function R(m). It is given that R(50) = 90 and R(100) = 100. This implies that you would pay $90 for driving 50 miles and $100 for driving 100 miles. The following applet contains the points (50, 90) and (100, 100) and calculates the slope of R(m): slope = (y2 − y1)/(x2 − x1). In the formula for slope, (x1, y1) and (x2, y2) are two points on the line. Notice that subscripts are used, not superscripts (which would mean exponents). In this scenario, the two points are (50, 90) and (100, 100). If you substitute those numbers into the formula, you get: slope = (y2 − y1)/(x2 − x1) = (100 − 90)/(100 − 50) = 10/50 = 0.2. Compare the formula with the graph to see that y2 − y1 calculates the difference in y-values of those two points, which is the rise in the graph; x2 − x1 calculates the difference in x-values of those two points, which is the run in the graph. The slope formula is a way to calculate a line's slope without graphing the line. Notice that the line also crosses the point (0, 80), which is its y-intercept. By the general formula of linear equations, f(x) = mx + b, the function's equation is R(m) = 0.2m + 80. The company charges $0.20 per mile with a flat fee of $80 up front. The amount of gasoline remaining in your gas tank, G(h), is a linear function of driving time, h, in hours. At time h = 0 (before any driving), you have 10 gallons in your tank. After driving 3 hours, you have 5.5 gallons in your tank. Find the equation of G(h). G(h) = −1.5h + 10. The slope is (y2 − y1)/(x2 − x1) = (5.5 − 10)/(3 − 0) = −4.5/3 = −1.5, and the y-intercept is 10, the starting value.
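As an illustrative sketch, the slope formula plus one known point recovers the whole linear equation; this short Python snippet reproduces both examples above.

    def line_through(x1, y1, x2, y2):
        # Return (slope, y-intercept) of the line through two points.
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        return m, b

    print(line_through(50, 90, 100, 100))  # (0.2, 80.0) -> R(m) = 0.2m + 80
    print(line_through(0, 10, 3, 5.5))     # (-1.5, 10.0) -> G(h) = -1.5h + 10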

Given a data set and a proposed function to model the data, identify if any outliers impact the fit of the model to the data.

Remember Jessica, the real estate agent who was collecting data on rent and square footage for a client who wanted to rent out her 2,000-square-foot house? The scatterplot Jessica made showed a generally straight line, but when she tried a linear regression, the line did not run directly through all the data points. In real life, you rarely see a data set that matches a certain type of function perfectly. If a data point is distinctly separate from the rest of the data, it is a possible outlier. In this lesson, you will see how outliers can affect attempts to find the best-fitting function for a given set of data.

Interpreting an Instantaneous Rate of Change

Remember Mappit and its new website? On launch day there were 10,200 visitors to the site. Each day after launch, the number increased by 10%. The number of site visitors y after x days can be modeled by the function y = 10,200 × 1.1^x. You looked previously at the average rate of change for this situation over a given number of days. Now, what about the instantaneous rate of change? To begin with, focus on understanding what these numbers mean rather than calculating them. On day 3, the instantaneous rate of change is about 1,294, which you can verify in the following applet. But what does 1,294 mean? The unit here is visitors per day, since the dependent variable measures "visitors" while the independent variable measures "days." So this instantaneous rate of change means that at midnight on the third day, the number of visitors was growing by 1,294 visitors per day at that instant. Recall that the instantaneous rate of change measures the rate of change at one specific time. In this case, x = 3 corresponds to when the third day starts, which is midnight. If you wanted to know how things were changing around noon on day 3, that would be halfway through the third day, so you would use x = 3.5. The instantaneous rate of change at this point is about 1,357 new visitors per day. As x gets larger, what happens to the instantaneous rate of change of the function y = 10,200 × 1.1^x? The instantaneous rate of change increases as x gets larger. Another way of saying this is that the slope of the line through the point corresponding to the instant gets larger and larger. After a sudden heavy rainfall at 8:00 a.m., Rock City's storm drain #34 is flooded. The civil engineer responsible for ensuring good drainage has to calculate how fast the water will flow away through drain #34. The function f(x) models the volume of water in liters, x minutes after 8:00 a.m., and the instantaneous rate of change when x = 4 is 3. What does this rate of change mean in terms of the drainage of the water? It means that at 8:04 a.m., the instantaneous rate of change of the water in the storm drain is 3 liters per minute. The rate of change is given for the time 8:04 a.m., which is an instant in time.
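If you want to verify those numbers without the applet, one hedged approach is a numerical estimate; this Python sketch approximates the instantaneous rate with a central difference rather than calculus.

    def visitors(x):
        # Site visitors x days after launch.
        return 10200 * 1.1 ** x

    def inst_rate(f, x, h=1e-6):
        # Numerical estimate of the instantaneous rate of change of f at x.
        return (f(x + h) - f(x - h)) / (2 * h)

    print(round(inst_rate(visitors, 3)))    # about 1294 visitors per day
    print(round(inst_rate(visitors, 3.5)))  # about 1357 visitors per day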

Comparing Instantaneous Rates of Change Continued

Remember that a graph can be used to determine which scenario is going to be better in the long term, but using rates of change can help quantify exactly how much the difference is between two scenarios. Consider this next example: Two new games are preparing to launch on the app store: Peanut Shoot and Zombalooza. Nick, an investor in gaming apps, is trying to figure out which gaming app to invest in. He has requested some information from the game developers, like the projected number of users for each game. Based on initial user feedback, the projections for the number of players are shown in the graph below. Instantaneous rates of change are also given for both projections. Based on these projections, which game seems like it will be more successful at the 100-day mark? At the 200-day mark? (Note: In the following applet, Peanut Shoot is the black function while Zombalooza is the red function; associated instantaneous rates of change are in similar colors and denoted with dotted lines.) It seems that the projections have Zombalooza outperforming Peanut Shoot. At the 100-day mark, there is not much of a difference between the two apps, with Zombalooza having a projected 113 users and Peanut Shoot having about 112. However, Zombalooza is growing at a rate of about 3 new users per day at the 100-day mark, while Peanut Shoot is only growing at a rate of about 2 new users per day. By day 200, the differences are starker, with Zombalooza projected to have about 613 users compared to 421 Peanut Shoot users; Zombalooza is still projected to grow faster at the 200-day mark, with 7 new users projected each day compared to about 4 users per day for Peanut Shoot. Based on the analysis above, you might expect Nick to invest in Zombalooza. However, Nick also inquires about the price point for each game. He finds out that Zombalooza will cost $2 to purchase when it is released while Peanut Shoot will cost $5. The previous analysis could be turned on its head with this new information and perspective on the problem. Below is a table that summarizes the previous data plus the projected amount of sales for each game.

Game	Cost	Projected Users on Day 100	Projected Users on Day 200	Day 100 Total Projected Sales	Day 200 Total Projected Sales
Zombalooza	$2	113	613	$226	$1,226
Peanut Shoot	$5	112	421	$560	$2,105

Nick can also use the instantaneous rates of change to determine projected new sales per day. That information would look like this:

Game	Cost	Projected New Users per Day on Day 100	Projected New Users per Day on Day 200	Day 100 Projected New Sales	Day 200 Projected New Sales
Zombalooza	$2	3 new users per day	7 new users per day	$6 per day	$14 per day
Peanut Shoot	$5	2 new users per day	4 new users per day	$10 per day	$20 per day

Based on this new information, Nick opts to invest in Peanut Shoot. Keep in mind that similar data could be observed for day 300, but all projections have their limitations. With any projection, the further things are pushed into the future, the more chance for error in the data or conclusions. That is not to say that the model would necessarily be wrong, but the results are much less certain. You are on vacation in Colorado and planning an all-day hike. When planning what to wear, should you focus on the short-term rate of change (the forecasted rate of change in temperature for the day) or the long-term rate of change in temperature (like the average temperature in Colorado over the last five years), and why?
In this situation, it would be better to use the short-term weather forecast, as that will more closely model your environment. The average temperature over the last five years could be very different from the temperature on this particular day. Suzie is trying to decide between two universities. To help her make her decision, she is reviewing the employment rates for graduates of the two universities. Based on the instantaneous rate of change today (at day 100), should Suzie go to University A (shown in black, originating at point P) or University B (shown in blue, originating at point Q)? University A, because the instantaneous rate of change at day 100 indicates more and more graduates from this university are being employed. At day 100, University A's instantaneous rate of change is 0.05, meaning that tomorrow (in one more day), University A's graduate employment rate will likely go up 0.05%.

Extrapolating with Care: Extreme Extrapolation Values

Return to the World Bank data on life expectancy. The dates here extend from 1960 to 2015. Extrapolating the life expectancy of a person born in 1918 from this graph by extending the line of best fit, or from the equation of the line of best fit, produces an estimate of 62.5 years: y = 0.1729 × (x − 1960) + 69.73 = 0.1729 × (1918 − 1960) + 69.73 ≈ 62.5 years. Does this number, 62.5, mean that most people born in 1918 lived to be 62 and a half years old? You may know that a terrible disease, called the Spanish flu, was rampant between 1918 and 1920. Also, the Great Depression began in 1929. As a final note, boys born in 1918 would have been the right age to serve in the army during World War II. All these events had a large impact on the life expectancy of people born in 1918, meaning that someone born in 1918 might have a much lower life expectancy than people born in other years, even just five or ten years before or after 1918. When making a prediction, the basic assumption is that the past looked like the current situation. The same is true for the future: the assumption is that it will strongly resemble the present. For babies born in 1918, that was not the case. If the Spanish flu, the Great Depression, and World War II had not happened, the prediction that, on average, someone born in 1918 would have lived to be 62.5 years old would be valid. The following graph displays the actual data for life expectancy for babies born in the United States, starting in 1900. [The graph shows a line that passes through 46 in 1900, slopes up and down in the interval of 45 to 55 from 1904 to 1920 to reach its minimum at 39 in 1920, from where it climbs steeply to reach 61 in 1924; it again slopes up and down in the interval of 55 to 65 from 1924 to 1944, from where it slides up gradually through the years to reach 79 in 2004.] © 2018 WGU As you can see from the big dip at 1918, the actual life expectancy was only 40 years for those born that year. The prediction of 62.5 years was so far off because, as the original line, based on data from 1960 through 2015, is extended beyond the known data, predictions become less accurate. Unfortunately, those born in 1918 were affected by sickness, wars, economic depression, and other issues. Fortunately, society stabilized, and life expectancy steadily increased starting about 1944. In general, there are no agreed-upon standards on the limits for computing an extrapolation. However, it is possible to put some boundaries on extrapolating. To do that, first understand range. The range is the distance between the x-value of the smallest data point, or x_min, and the x-value of the largest data point, or x_max. Said more mathematically, range = x_max − x_min. Another way to think of it: The range is basically the width of the data in the x-direction. The range helps determine exactly how far out it is possible to go with extrapolations. With a moderate or strong model, it is reasonable to go out 25% of the range of the data for an accurate extrapolation. With a strong model, it is possible to go out 50% of the range, but you should expect a somewhat risky extrapolation. Anything beyond 50% of the range is considered an extreme extrapolation value and should be avoided. For the A&B Car Sales example, there was data between the values of t = 5 and t = 15 and an r²-value of 0.75, indicating a strong model. The range is 15 − 5 = 10, and that allows for doing either an accurate or a risky extrapolation. For a more accurate extrapolation, you would only go out 25% of the range, or 10 × 0.25 = 2.5 years.
You could extend from both the highest and lowest data values, x_max and x_min, respectively, meaning you could go up, or into the future, as far as x_max + (0.25 × range) = 15 + 2.5 = 17.5, which would be a maximum value of 17.5 years, or as far down, or into the past, as x_min − (0.25 × range) = 5 − 2.5 = 2.5, or 2.5 years. For a riskier extrapolation, you can go out as far as 50% of the range. This would mean you could go up, or into the future, as far as 5 years, or as far down, or into the past, as 5 years. Is it ever appropriate to go further out than 50% of the range? Anything past 50% of the range is referred to as an extreme extrapolation. These are extrapolation values that only a regression professional should attempt. The following table summarizes these values, and a corresponding graph follows so that you can view them in each format.

Section	Low Value	High Value	Notes
Interpolation (green, or widest middle section in the graph)	x_min	x_max	Safe section for any moderate or strong model
More accurate extrapolation (yellow sections, the narrow sections on either side of the green center section)	x_min − (0.25 × range)	x_max + (0.25 × range)	Safe section for any moderate or strong model
Risky extrapolation (orange sections, one section in from the outer sections)	x_min − (0.5 × range)	x_max + (0.5 × range)	Somewhat safe section for any strong model
Extreme extrapolation (red sections, the two outermost side sections)	No lowest value	No highest value	Should only be done by a regression professional

[The graph shows a line that passes through about (4, 20) and (18, 250). Data points closely follow the line in a wave-like pattern. The area around the points, from 5 to 15, is labeled Interpolation and the remaining area is labeled Extrapolation.] © 2018 WGU, Powered by GeoGebra For the purposes of this course, any value outside the 50% threshold is considered an extreme extrapolation value and should not be considered. Professionals often do interpret extreme extrapolation values with great accuracy, but they can do so only after years of training. In short, an extreme extrapolation is not necessarily wrong; it just should be computed by a regression professional. This kind of professional modeling is common in healthcare, economics, the physical sciences, and engineering.
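A minimal Python sketch of the extrapolation boundaries described above, using the A&B Car Sales values (x_min = 5, x_max = 15):

    def extrapolation_bounds(x_min, x_max, fraction):
        # Lower and upper x-limits for extrapolating a given fraction of the range.
        r = x_max - x_min  # the range: the width of the data in the x-direction
        return x_min - fraction * r, x_max + fraction * r

    print(extrapolation_bounds(5, 15, 0.25))  # (2.5, 17.5) -> more accurate extrapolation
    print(extrapolation_bounds(5, 15, 0.50))  # (0.0, 20.0) -> risky extrapolation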

Graph and Inverse Functions

Rick's computer store buys old computers to refurbish and resell them. Rick calculates a computer's value, including depreciation, by using the following function: V(m) = 300 − 5m, where V is the value of a computer and m is the number of months since it was purchased. The graph below shows a computer's depreciation based on the given function. Note that the correct way to write "the inverse of a function" is to write a superscript "−1" after the letter that names the function. It looks like V^(−1). Even though the superscript looks like an exponent, it is not one. In other words, although x^(−1) = 1/x, V^(−1) does not mean 1/V, because V is a function, not a variable. It all depends on the context. The function V models a computer's value after m months. Its inverse function would model the number of months since the purchase based on the computer's depreciated value. The function yields a number of input-output pairs, which show exactly what input produces exactly what output. For example, the point (60, 0) is on V, implying the computer is worth nothing after 60 months. The point (0, 60) must be on V^(−1), implying 60 months have passed since the purchase if a computer is worth nothing today. Similarly, since the points (0, 300), (10, 250), and (20, 200) are on V, the points (300, 0), (250, 10), and (200, 20) must be on V^(−1).
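As a short sketch, solving v = 300 − 5m for m gives the inverse V^(−1)(v) = (300 − v)/5; this derivation is implied by, but not written out in, the text. In Python:

    def V(m):
        # Value in dollars of a computer m months after purchase.
        return 300 - 5 * m

    def V_inv(v):
        # Months elapsed, given a computer currently worth v dollars.
        return (300 - v) / 5

    print(V(60), V_inv(0))    # 0 and 60.0: the pairs (60, 0) and (0, 60)
    print(V(10), V_inv(250))  # 250 and 10.0: the pairs (10, 250) and (250, 10)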

Maximizing Returns on Stocks Using Average Rates of Change

Ron wants to see the trends in his stock portfolio's performance over a year. He starts by comparing the average rates of change between months 1 and 2 and between months 1 and 4. Look at the graph of his portfolio's values so you can compare these two rates of change. What did you notice? Between months 1 and 2, the average rate of change was $12,000 per month, a substantial increase. However, between months 1 and 4, the average rate of change was −$841.34 per month, a loss. What does this mean? Investing in the stock market is risky in the short term, but in the long term an investor usually hopes for a large gain. If Ron were to look only at his portfolio's average rate of change for the first month, he would think he was doing extremely well. Similarly, if he were to look only at his investment between months 1 and 4, he would think his investment was doing poorly; the portfolio lost an average of $841.34 per month over those four months. If an average rate of change is positive, the corresponding line increases, or rises; if an average rate of change is negative, the corresponding line decreases, or falls. Next, expand your view of Ron's portfolio. Compare the average rate of change between months 1 and 2 and the average rate of change between months 1 and 12. The average rate of change from month 1 to month 2 was $12,000 per month, while the average rate of change from month 1 to month 12 was $3,636 per month. That is a big difference, and you can see it on the graph. Compare the slopes of those two lines. Since the average rate of change between months 1 and 2 is larger, the corresponding line is steeper than the line representing the average rate of change over the whole 12 months. Now you know that you can visually compare two average rates of change by comparing the steepness of their lines. Even if both lines have positive slopes, the steeper line represents a greater rate of increase overall. Similarly, if both lines have negative slopes, the steeper line represents a greater rate of decrease overall. Using these average rates of change, Ron may start to think about doing more short-term investments. For example, since he saw the greatest rate of return from month 1 to month 2, he might start changing his investments when he has large return months like this. Essentially, the average rate of change helps him compare how his investments are earning money over time.

Online Gamers

Sarah decided to look at how the number of online gamers for those two games varied over the course of yesterday. She found that the two games show different patterns in their numbers of online gamers. First, define G(t) to be the number of Glory of Lords gamers, in thousands, where t is the number of hours since 0:00 (midnight) yesterday. Similarly, define Z(t) to be the number of Zoo Alive gamers. From the data in the following graph, would Glory of Lords or Zoo Alive be the optimal game for the new server? Answer that question by identifying the game that had more online users yesterday. Overall, it looks like Glory of Lords has more users for a little over 11 hours of the day: the first 10.7 hours plus the last 0.5 hours. Zoo Alive has more users from approximately t = 10.8 to approximately t = 23.5, which is from about 10:48 a.m. to 11:30 p.m., or a little under 13 hours of the day. This seems like a tight call, but Zoo Alive seems to be the optimal candidate for the new server based on this data. Since it was so close, though, Sarah may consider other data before making a final decision. Consider a different question now: When did those two games have the same number of online users? This would occur where the two functions' graphs intersect. Estimate these intersection points in the graph. You should have estimated the coordinates to be located roughly at (10.8, 30) and (23.5, 41.5), which imply that both games had about 30,000 gamers around 10:48 a.m. and about 41,500 gamers at 11:30 p.m. You may recognize these as the times mentioned in the first analysis. Just keep in mind that different questions can lead you to similar values; moreover, different questions can also be answered using similar methods of analysis.

Lesson Summary

In this lesson, you learned to analyze a graph displaying two functions to find an optimal solution in various contexts. Here is a list of the key concepts in this lesson:
- Finding an optimal solution based on a graph depends on the criteria on which the problem is based.
- When trying to optimize scenarios, interpret the independent and dependent variables in context and find the corresponding input-output values.

Given two graphs of data for two real-world situations, identify the optimal situation based on the real-world situation and the input-output pairs.

Sarah manages web servers at Gamer Central. Two web servers host two online games, Glory of Lords and Zoo Alive. Sarah has a new server she would like to put one of these two games on. However, Sarah wants to make sure that whichever game is on the new server optimizes the user experience; that is, the game with more users at any one time should be on the new server. In this lesson, you will find optimal solutions to real-world problems based on specific criteria and also interpret independent and dependent variables in context to find the corresponding input-output values.

Given a real-world scenario and a corresponding logistic function or its graph, interpret the average rate of change at two specified values in context.

Say you walked 10 feet in 5 seconds. Your speed might not have been constant, and you might have even stopped for a second or two. Either way you look at it, you covered 2 feet per second on average. Another way of saying this more formally is that your average speed was (change in distance)/(change in time) = 10 ft / 5 s = 2 ft/s. In general, average rate of change = (change in y-value)/(change in x-value). In this lesson, you will get more practice with calculating and interpreting average rates of change at two specified values.

Given a scatterplot of real-world data, visually identify the occurrence of outliers.

Scatterplots are valuable tools for visualizing data. In particular, scatterplots often reveal outliers that can skew results if not handled correctly. In real life, you often need to analyze data based on a scatterplot. Sarah has collected online gamer data from her web servers and plotted it on a scatterplot to see if there were any possible outliers. Now she needs to use this data to make business predictions. In this lesson, you will help Sarah with that task, focusing in particular on what to do with possible outliers.

Membership Changes

See if you can estimate solutions based on the graph coming up that visualizes the up-and-down history of Sunrise Sky Spa. Even though you do not know the equation of the function in the graph, or even what type of function it might be, you can still estimate solutions for equations related to the graph. Treat the following graph as a function of the number of memberships in thousands, M(t), where t is the number of years since 2000. For instance, to find the maximum of this function over the data shown, identify the highest value, which appears to occur approximately at the y-value 21.9. The associated equation to solve would then be M(t) = 21.9.

You may have already found the associated x-value for this coordinate. Remember, when you are given a y-value, such as the maximum, finding the associated x-value is considered "solving the equation." In this case, the coordinate can be estimated as (7, 21.9), which can be written in function notation as M(7) = 21.9. This implies that the maximum you are looking for occurs when t = 7. To interpret this solution in context, this means that Sunrise Sky Spa achieved its maximum number of memberships in 2007, and that the number of memberships at the maximum was about 21,900.

Perhaps management suspects that there was something about reaching 20,000 memberships that started deterring new customers. Perhaps there was feedback that the spa locations were too crowded once 20,000 members were reached. Management might want to know exactly when 20,000 members were reached to try to research that snapshot in Sunrise Sky Spa's history. To answer this question, you need to solve the equation M(t) = 20. There would be no way to solve such equations by algebra, since you do not know what the actual function is in this case. However, you can still estimate the solutions by looking at the graph.

First, notice that there are two places where the function's y-value, or output, was 20. The input-output pairs for those instances are (5.8, 20) and (8, 20). This means the equation M(t) = 20 has two solutions: t ≈ 5.8 and t ≈ 8. In other words, when the function's input is 5.8 or 8, the output is 20. In function notation, you could write these solutions as M(5.8) = 20 and M(8) = 20. To interpret these solutions, just keep the units of the independent and dependent variables in mind. The bottom line is that near the end of 2005 and again at the start of 2008, Sunrise Sky Spa had about 20,000 members.

Keep in mind that the x-values in (5.8, 20) and (8, 20) are just estimations. It would be perfectly fine to estimate them to be (5.78, 20) and (7.99, 20) and say the equation's solutions are t ≈ 5.78 and t ≈ 7.99. Of course, you would expect some degree of error when estimating solutions by a graph. You should always try to estimate to at least one decimal value beyond the grid marks given in a graph. For example, since the grid marks in this graph are given in units of one year, you should be able to estimate to one decimal value beyond that (or to a tenth of a year, or 0.1).

Lesson Summary

In this lesson, you learned to interpret solutions to equations in different contexts. This is an important skill because each equation and each solution is in the context of real life. For example, the solutions t ≈ 5.8 and t ≈ 8 implied that Sunrise Sky Spa had 20,000 memberships in late 2005, and then again at the beginning of 2008. Here is a list of the key concepts in this lesson:

- You can estimate solutions to equations by using what you see on graphs.
- Solutions can be viewed as either input-output pairs or as coordinates on the graph; both approaches are valid and equivalent.
- To translate the solution to an equation into real-world meaning, keep the independent and dependent variable units in mind.
- Remember that coordinate pairs usually list the independent variable first and the dependent variable second.
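When a graph's underlying values are available as sampled data points, you can automate this kind of estimate with linear interpolation between neighboring samples. The following Python sketch is a minimal illustration; the (t, M) sample values are hypothetical numbers chosen only to resemble the membership curve described above, not actual data from the lesson.

```python
# Estimate solutions of M(t) = 20 from sampled graph values.
# The (t, M) samples below are hypothetical numbers chosen only to
# resemble the membership curve described in this lesson.
samples = [(4, 16.0), (5, 18.5), (6, 20.3), (7, 21.9), (8, 20.0), (9, 17.5)]
target = 20.0

solutions = []
for (t1, m1), (t2, m2) in zip(samples, samples[1:]):
    if min(m1, m2) <= target < max(m1, m2):
        # Linear interpolation between neighboring samples
        t_cross = t1 + (target - m1) * (t2 - t1) / (m2 - m1)
        solutions.append(round(t_cross, 1))

print(solutions)  # [5.8, 8.0] -> late 2005 and the start of 2008
```

Each crossing of the target value corresponds to one solution of M(t) = 20, just as each intersection with the horizontal line y = 20 does on the graph.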

Given two scatterplots of real-world data (one with outliers, one without outliers), the two associated polynomial regression functions, and the associated coefficients of determination, identify the more appropriate regression function for the data.

Sergio, a business analyst at Youth Again mall management company, is reviewing the trends of shoppers. He notices that one of the data points is very far away from the general trend of the data. When you collect data, it is normal for some of it not to fit the general trend of the data set. These points are possible outliers, and you will investigate what makes them break away from the general trend of the data. In this lesson, you will first learn what to look for in possible outliers and what makes them true outliers. Second, you will learn how true outliers affect the regression function, its coefficient of determination, and its graph. Finally, you will learn how to handle true outliers to make your conclusions fit real-world circumstances.

How Lines Work: Inputs and Outputs

Seth has started driving for a trip from home. His speedometer tells him he is traveling at 65 miles per hour. If he travels at this speed for three hours, how far will he travel? You can calculate this as 3 × 65 = 195 miles. This also means Seth's trip odometer will show he has traveled 195 miles at three hours into the trip. Here is another way of looking at this: In the following graph, the distance that Seth has traveled is given by the function d(t) = 65t, where t, the input variable, is the time he has been traveling. Then d(t), the output variable, is the distance the car has traveled in t hours.

Suppose Seth is on another, longer trip now. For the first leg of the trip, Seth drove 250 miles before pulling off for some food. After eating, he resumes his trip. If Seth resumes driving at 65 miles per hour, how can you model the distance he travels in the second leg of his trip? For the second leg of the trip, the distance Seth had traveled at time t = 0 was 250 miles, so use the function d(t) = 65t + 250. Since d(0) = 65 × 0 + 250 = 0 + 250 = 250, you know that at the time he started driving again, or at t = 0, Seth had gone 250 miles. Then, to find the distance three hours into the second leg of the trip, substitute 3 for t (since t = 3 in this situation) and find d(3) = 65 × 3 + 250 = 195 + 250 = 445 miles.

In the last examples, you have seen functions of two forms: f(x) = mx and f(x) = mx + b. You can consider these to be the same type of function; the first version is just the second when b = 0. To do more complex input/output problems, it is vital to use the order of operations: parentheses, exponents, multiplication and division (from left to right), and addition and subtraction (from left to right).

Please Excuse My Dear Aunt Sally

You may also use the phrase "Please Excuse My Dear Aunt Sally" as a way to remember the order of operations. The functions of the form f(x) = mx that you looked at previously had only one operation to perform: multiplication. These new functions of the form f(x) = mx + b have two operations, however: addition and multiplication. You just have to remember that multiplication comes before addition. If you forget that, you will get a very different, incorrect answer. For instance, in the driving example, if you add first, a mistake, you get d(3) = 65(3 + 250) = 65 × 253 = 16,445 miles! Suddenly, it looks like you drove around the world in just three hours! The good news: for linear functions, with or without b, multiplication, division, addition, and subtraction will be your primary operations. Later on, you will deal with parentheses and exponents.

A chorus rents tuxedos for its singers at $85.74 per tuxedo with a $25 processing fee, so the cost function is C(x) = 85.74x + 25. How much does it cost the chorus to rent tuxedos for 14 singers? C(14) = 85.74(14) + 25 = 1,225.36, so it costs $1,225.36. The distance from Boston, MA, to Albany, NY, along Interstate 90 is 169.5 miles, and the maximum speed limit in both Massachusetts and New York is 65 miles per hour. You left Boston 90 minutes ago, driving at the speed limit. If the function that gives the remaining distance is d(x) = 169.5 − 65x, where x is the number of hours you have already traveled, how much farther is it to Albany?
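To answer that last question, note that 90 minutes is 1.5 hours, so the remaining distance is d(1.5) = 169.5 − 65 × 1.5 = 169.5 − 97.5 = 72 miles. Here is a minimal Python sketch of both computations; the function names are just for illustration.

```python
def tuxedo_cost(x):
    # C(x) = 85.74x + 25: cost in dollars to rent x tuxedos plus the processing fee
    return 85.74 * x + 25

def remaining_distance(hours):
    # d(x) = 169.5 - 65x: miles left to Albany after driving for x hours at 65 mph
    return 169.5 - 65 * hours

print(tuxedo_cost(14))          # ~1225.36 dollars
print(remaining_distance(1.5))  # 72.0 miles
```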

Identifying Periods of Rapid Improvement

Since rates of change measure how variables change, you can compare rates of change to see how different periods of time compare to one another. This provides you with a way to compare periods of growth or improvement to identify an optimal one. Consider this scenario: TaeComm Company produces random access memory (RAM) for personal computers (PCs). The time it takes a new TaeComm RAM to transfer 1 GB of data can be modeled by the function s(t) = 689.43/(1 + 0.2e^(0.26t)) + 103.54, where s(t) is time in microseconds, and t is the number of years since 2000.

To understand the basics of this function and the following graph depicting it, first compare average rates of change for two time periods: from point A to point B, and then from point C to point D. How would you interpret the average rate of change from A to B, and the average rate of change from C to D? Begin by imagining a line through points A and B, and another line through C and D. Since the line from A to B is steeper than the line from C to D, the function decreased faster from A to B than it did from C to D. This implies that the time it takes to transfer data decreased more quickly from 2002 to 2004 than it did from 2014 to 2016. Which of these situations is ideal from the perspective of TaeComm's management? Management would see the average rate of change from 2002 to 2004 as the ideal situation, since the company improved RAM speeds more during this time. However, keep in mind that the values in 2014 and 2016 are approaching 0 microseconds. This means that improvements have to slow down, since speeds were already very fast by that time.

Now compare the instantaneous rates of change at the points A and D in the following graph. Which has the optimal instantaneous rate of change, and how can you interpret this value? Imagine a line touching the curve at point A, and a line touching the curve at point D. The line touching point A will be steeper, implying that the function is decreasing faster at the beginning of 2002 than it is at the beginning of 2016. How do you interpret those two lines in terms of what is best for TaeComm and the company's management? Which instantaneous rate of change would management see as optimal: the one from the beginning of 2002, or the one from the beginning of 2016? TaeComm's management would rather see the instantaneous rate of change at the beginning of 2002, because the trend at that time showed a faster decrease in the time to transfer 1 GB of data, meaning a faster increase in RAM speed. That said, do not forget that with speeds for transferring data approaching 0 microseconds, there is not a lot of room for greater speed in this situation. The following is a graph of these instantaneous rates of change for comparison.

Lesson Summary

In this lesson, you looked at a scenario, compared rates of change in context, and chose an optimal solution. You compared average and instantaneous rates of change, you interpreted the meaning of these rates of change, and ultimately you identified the optimal solution from the available choices. Here is a list of the key concepts in this lesson:

- Comparing two average rates of change allows you to identify an optimal solution over a period of time or over an interval.
- Comparing two instantaneous rates of change allows you to identify an optimal solution from the choice of two instants.
- You can estimate which is an optimal solution by visually comparing average rates of change on a graph or by comparing instantaneous rates of change on a graph.
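If you want to check the A-to-B versus C-to-D comparison numerically, you can evaluate the model directly. This Python sketch assumes the form of s(t) written above and computes the two secant-line slopes; the helper name is just illustrative.

```python
import math

def s(t):
    # Transfer time in microseconds, t = years since 2000
    return 689.43 / (1 + 0.2 * math.exp(0.26 * t)) + 103.54

def avg_rate(f, a, b):
    # Average rate of change of f over [a, b]: slope of the secant line
    return (f(b) - f(a)) / (b - a)

print(round(avg_rate(s, 2, 4), 1))    # ~ -37.8 microseconds per year (A to B)
print(round(avg_rate(s, 14, 16), 1))  # ~ -15.0 microseconds per year (C to D)
```

Both slopes are negative because transfer time is falling; the steeper (more negative) slope from t = 2 to t = 4 is what makes 2002 to 2004 the period of faster improvement.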

Given the graph of a real-world scenario, interpret what concave up or down means in context.

Since you have probably driven a car before, you know that you can speed up steadily or very quickly. For example, when you push the gas pedal hard, your car covers more ground more quickly. This idea of covering more ground more quickly leads us to a concept called concavity. Concavity is a concept and a tool connected with the idea of rates of change. Concavity shows the "shape" of change, so it is most easily understood in visual form, on a graph. These shapes convey two main things—the direction of a change and the speed of a change. In this lesson, you will learn more about what concave up and concave down mean in the context of specific situations. You will also learn how concavity can reflect "turn-arounds" that can mean good news or bad news for people or companies.

Finding Horizontal Asymptotes with Graphs

So far, you have looked at several situations with graphs that tended toward zero. That said, not every asymptote tends toward zero. For example, here is a graph of the number of viewers (in millions) for a TV show called Viking Wars, measured in weeks. Notice that the pilot episode of Viking Wars (when t = 0) had 15 million viewers. As time went on, however, some people started dropping off and not watching the show anymore. As the weeks went on, it seemed that the show was settling toward about 5 million viewers per episode. Visually, you can see that the y-values get closer to y = 5 as the x-values get bigger.

Another show, Game of Crowns, attracted 10 million viewers for its pilot episode. From there, the fan base continued to grow, episode after episode. This function has an asymptote at y = 10, which you can see since the y-values toward the negative x-direction all tend toward the y-value 10. In this context, looking backward toward "negative episodes" does not make sense, so you would not necessarily interpret the negative x-direction here. Just be sure that you can spot asymptotes in logical situations.

When looking for an asymptote graphically, see if the function's values get closer to a horizontal line as x either increases or decreases. If there is an asymptote, the graph of the function will not move away from that horizontal line. The function's graph appears almost to merge with the line, either as x gets big (to the right) or small (to the left). Here are versions of the Viking Wars and Game of Crowns graphs so you can see what this looks like with the horizontal lines drawn in.

Does the function shown in the graph have an asymptote? Explain your answer. For one practice graph, the answer is yes: the function has an asymptote, because the x-values in the negative direction show the function tending toward y-values of 15, and the function starts to look more and more like the horizontal line y = 15. For another practice graph, the answer is no: there are no specific y-values that the function approaches as x gets more and more positive or negative. Instead, it looks like the function values may decrease forever on the left, and the trend on the right does not tend toward a specific value, so there are no asymptotes there.

Lesson Summary

In this lesson, you learned what an asymptote is and how to find one for an exponential function. You also learned that natural limits, or asymptotes, occur in many business, IT, and real-world scenarios. Here is a list of the key concepts in this lesson:

- Asymptotes occur when the y-values of a function tend toward a specific value as the x-values get large or small.
- Of the functions presented so far (linear, polynomial, and exponential functions), only exponential functions have asymptotes.
- There is always one asymptote in an exponential growth or decay problem.
- Asymptotes can be identified using reason. To do that, see if the response variable would naturally "level off" as its x-values get more positive or more negative.
- Asymptotes can also be identified using graphs. To do that, look for a horizontal line that the function tends toward as the x-values get more positive or more negative.
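You can also see this leveling-off numerically. The formula below, V(t) = 5 + 10e^(−0.4t), is an assumption chosen only because it starts at 15 million and levels off toward 5 million; the lesson does not give the show's actual function.

```python
import math

def viewers(t):
    # Hypothetical model for Viking Wars viewership (millions) in week t:
    # starts at 15 million and levels off toward the asymptote y = 5
    return 5 + 10 * math.exp(-0.4 * t)

for t in [0, 5, 10, 20, 40]:
    print(t, round(viewers(t), 3))
# The printed values approach 5.0 as t grows -> the horizontal asymptote
```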

Comparing Average Rates of Change Continued

So far, you have seen how to calculate and interpret average rates of change. Now you will look at comparing situations with average rates of change to identify optimal situations. Consider the following example: Carbon fibers are lightweight, strong, and highly conductive; they are used in many industries, including automotive manufacturing, aerospace engineering, and sports equipment. The cost of producing carbon fiber increases as the length of the fiber increases. The exponential function c(x) = 15 × 1.15^x models the cost in dollars when x is the length of the fiber in inches. The function tells you that the longer the carbon fiber in inches, the more expensive it is. What is the average rate of change for production costs as the piece changes from one length to another?

Previously, you calculated the average rate of change for given functions. One thing you learned in calculating the average rate of change is that the average rate of change from a to b is the same as the slope of the line through the two points on the graph corresponding to x = a and x = b. This fact makes comparing average rates of change much simpler when using a graph. Say you need to compare the average rate of change for the cost of production from 5 inches to 10 inches and from 5 inches to 15 inches. Before you start calculating, examine the following graph. Referring to the graph, you can see that one of these lines is steeper than the other. The line going from point A (when x = 5) to point C (when x = 15) is steeper. This immediately tells you that the average rate of change for increasing the length of the fiber from 5 inches to 15 inches is more than the average rate of change for increasing the length from 5 inches to 10 inches. How do you calculate exact average rates of change like these? You can find the coordinates associated with points A, B, and C, and then use the slope formula, or you can use the following applet. If you still need some practice calculating average rates of change, use the applet to make sure you are doing your calculations correctly.
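If you would rather check the slopes directly from the formula, here is a minimal Python sketch; the point labels follow the graph described above (A at x = 5, B at x = 10, C at x = 15).

```python
def c(x):
    # c(x) = 15 * 1.15^x: production cost in dollars for an x-inch fiber
    return 15 * 1.15 ** x

# Average rate of change = slope of the secant line between two lengths
slope_AB = (c(10) - c(5)) / (10 - 5)
slope_AC = (c(15) - c(5)) / (15 - 5)

print(round(slope_AB, 2))  # ~6.10 dollars per inch
print(round(slope_AC, 2))  # ~9.19 dollars per inch -> the steeper line, A to C
```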

Interpolation and Extrapolation Values

Some might say all you need to do is make an educated guess. You could guess that you are going to live to be 105, but would that be a good prediction or just wishful thinking? If there were data and facts to back your prediction up, the idea of living to see 105 could become a true prediction. In fact, that is exactly what a prediction is: a guess supported by data and facts. For example, the World Bank has accumulated a lot of information, including census data, supporting its predictions for the life expectancy of United States-born citizens. According to this data, the life expectancy of a person born in the United States in 1980 is estimated to be 72.8 years. You could estimate the life expectancy of someone born in 1940 to be less than the estimate of 69.77 for someone born in 1960 just by looking at the slightly decreasing trend in the data at the far left of the graph.

When a line or a curve on a graph continues to move in one direction, like this line does, it is called a general trend. Keep in mind that a general trend is actually a prediction; that is, a general trend is a guess supported by data and facts. Even with a reliable general trend, the ability to predict accurately further out in time, in the past or the future, is reduced as the data points are left behind. In this case, life-expectancy predictions for Americans born in 1900, 1860, 2050, or 2075 would become less and less accurate as one moved away from the actual data points. Keep in mind that there is no standard rule for how far in the future predictions may be valid.

When working with data and a model to make a prediction, you will usually find yourself in one of two situations: trying to make a prediction for a situation or time for which you have data, or trying to make a prediction for a situation or time for which you do not have data. The first case, making a prediction for a situation or time for which you have data, is called an interpolation. The second case, making a prediction for a situation or time for which you have no data, is called an extrapolation. In general, regression models are much more reliable and accurate for interpolation values than they are for extrapolation values. That means that you need to be more careful when looking at extrapolation values. For the life expectancy data, the interpolation area is the interval between 1960 and 2015. This is the interpolation area because there is data on life expectancy for these years. The extrapolation area is anywhere outside of these years; that is, before 1960 and after 2015.

In the World Bank data about life expectancy, you can see a general linear relationship visually in the data, but it is not necessarily appropriate to use a linear regression function here. Why not? The reason is that for extrapolation values in the future, or as x increases, life expectancy, y, also increases constantly and indefinitely. While it certainly makes sense that life expectancy goes up over time, life expectancy will not continue to increase constantly in a straight line forever. That is why a linear model does not make sense. However, if the goal were more limited, such as predicting life expectancy values for people born between 2015 and 2025, that would be a different matter. Those would be extrapolation values and therefore a bit risky, but not as risky as predicting for 2075, because the extrapolation for 2025 is not as far from the known data points as for 2075.
Using a linear regression for interpolation values, however, is very safe because those values fall in the area where known data already exists. Based on the life expectancy data, the equation of the line of best fit with the independent variable x (year of birth) that could predict y, the dependent variable (life expectancy), is calculated to be y = 0.1729 × (x − 1960) + 69.73. The r-value and r²-value for this equation were 0.99 and 0.98, respectively, so this model fits the data well. The phrase "fits the data well" is important. It means that the data for the interpolated points fits this linear model extremely well, almost perfectly. However, remember that this linear model would almost certainly not fit some extrapolation values well if they were far into the future or the past. In general, if a regression model has a strong or even moderate r²-value (above 0.30), then it can be used with confidence for any interpolation values. Never use a weak model (r² ≤ 0.30) to predict any interpolation values, however. For extrapolation values, the situation is not as clear-cut. Some regression models fit extrapolation values better than other regression models. For instance, a linear regression usually fits better than a logistic regression. All in all, extrapolating with different regression models can sometimes be difficult to judge.
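Here is a minimal Python sketch that evaluates that line of best fit for one interpolation year and two extrapolation years; the specific test years are illustrative choices. Note that the line's value for 1980, about 73.2, differs slightly from the data-based estimate of 72.8 quoted earlier, which is normal for a regression line.

```python
def predicted_life_expectancy(year):
    # Line of best fit from the lesson: y = 0.1729 * (x - 1960) + 69.73
    return 0.1729 * (year - 1960) + 69.73

print(round(predicted_life_expectancy(1980), 2))  # ~73.19 -> interpolation (inside 1960-2015): reliable
print(round(predicted_life_expectancy(2020), 2))  # ~80.10 -> mild extrapolation: use with caution
print(round(predicted_life_expectancy(2075), 2))  # ~89.61 -> far extrapolation: likely unrealistic
```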

Solutions to Linear Equations

Some positions have compensation based on sales. For these positions, you make a percentage of whatever you sell. This can mean your paycheck varies quite a bit. If you work in a position like this, you might have a bare minimum salary you have to make to maintain all your expenses; in that situation, it could be very helpful to know exactly how much you have to sell each pay period to "break even." This is exactly the kind of situation where you would need to solve a linear equation and translate that solution into real-world meaning. In this lesson, you will interpret solutions to linear equations. You will also use graphs to find solutions to problems, noting the spacing of the graph's grid, and write those solutions in function notation.
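For instance, suppose (purely as a hypothetical illustration) that you earn a $400 base per paycheck plus a 5% commission on s dollars of sales, and your expenses are $1,200 per paycheck. The break-even condition is the linear equation 0.05s + 400 = 1200. Subtracting 400 from both sides gives 0.05s = 800, and dividing by 0.05 gives s = 16,000, so you would need $16,000 in sales per paycheck to break even.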

Example of Predictable Maximum

One example of a quantity with a predictable maximum is the rate at which a company generates revenue, in dollars per month, and knowing that maximum can be very helpful. The point is that with logistic models, always think about what you are modeling and make sure the variables truly have limited maximums.

World Population

Sometimes real-world data can be messy. When looking at world population data, for example, sometimes population counts drop unexpectedly, such as during the Black Plague in the fourteenth century. Thankfully, such drastic impacts to the world population have been less severe in more modern times, but there are still effects to consider when trying to model the world population. Consider the following example. World population data are provided in the following table:

Years Since 1800 | World Population in Billions
4 | 1
127 | 2
159 | 3
174 | 4
187 | 5
199 | 6
211 | 7

The following applet models the data in the table with an exponential function, g(x) = 0.82 × e^(0.01x). Gesturing at the point (4, 1), Gloria explained to her team that it is an outlier, a data point that is distinctly different from the rest of the data. She gave two reasons: (1) since the 1900s, advancements in agriculture and medicine have triggered exponential growth in the human population, and (2) if the point (4, 1) is included in the regression, the regression function would underestimate world population; without the point (4, 1), the rest of the data points clearly point to a faster growth trend than the regression function shows. Team members were convinced. They decided not to include the data point for 1804, and the updated data table is the following set:

Years Since 1900 | World Population in Billions
27 | 2
59 | 3
74 | 4
87 | 5
99 | 6
111 | 7

The following applet models the data in the new table. The new regression function is f(x) = 1.29 × e^(0.02x). Compared to g(x) = 0.82 × e^(0.01x) from the last applet, you can see:

- There are no more obvious outliers.
- The function f(x) and the data seem to share the same trend.
- The coefficient of determination improved from 0.9 to 0.9962.
- Gaps between data points and the function became smaller.

When doing data regression, you need to visually spot possible outliers. When you spot possible outliers, you either (1) verify that the data are accurate and occurred for understandable reasons, or (2) remove them from the data used for regression because the data are not accurate for reasons outside the scope of the data, like Gloria's team did.

Lesson Summary

In this lesson, you learned that an outlier affects a regression function's equation and graph, and it also generally decreases the coefficient of determination. Ultimately, including an outlier also increases or decreases the predicted values. If there is a good reason, such as a very rare event, remove outliers from a data set to improve the accuracy of predicted values. Here is a list of the key concepts in this lesson:

- Scatterplots with outliers can produce skewed results.
- In general, outliers decrease the coefficient of determination and change the equation and graph of the regression function.
- If an outlier is visually spotted in the data or is known of ahead of time, it is generally good practice to remove the outlier from the data set to improve the fit of the regression function.
- If you are given the choice between a scatterplot of real-world data with outliers versus one without, you should generally favor the one without outliers.
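If you ever need to reproduce a fit like this outside the applet, one quick approach is a log-linear least-squares fit: fit a straight line to ln(y). This is a simplified stand-in for the applet's regression method, so its coefficients will not match exactly; the Python sketch below is an illustration, not the applet's internal algorithm.

```python
import math

# Updated data: (years since 1900, world population in billions)
data = [(27, 2), (59, 3), (74, 4), (87, 5), (99, 6), (111, 7)]

# Fit ln(y) = ln(a) + b*x by ordinary least squares
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(math.log(y) for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * math.log(y) for x, y in data)

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = math.exp((sy - b * sx) / n)

# This quick method gives a ~ 1.28 and b ~ 0.015; the applet's
# (different) fitting method reports f(x) = 1.29 * e^(0.02x).
print(round(a, 2), round(b, 3))
```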

Revenue Growth

Sometimes there are good reasons to leave data in a data set, even when it seems like it could be an outlier. Milton ran across such a situation as he was researching Apex Online, a respected internet retailer. Milton has been researching the revenue of Apex Online starting in 1990 to identify some of Apex Online's successful business practices. The following scatterplot shows the revenue of Apex Online since 1990. Milton wonders if there are two possible outliers here, points D and J. Point D is not an obvious outlier; however, Milton noticed that the revenue was increasing in the months before and after 1993, so he wants to investigate this point as a possible outlier. After researching these two data points, Milton learned that Apex Online recalled a major product in 1992, leading to lower sales in 1993, corresponding to point D. On the other hand, Apex Online released a few new, and very popular, products in 1998, leading to higher than normal revenue the following year. As it turns out, these data points are not outliers at all; they are higher and lower than the general trend of the data for reasons consistent with the data itself. That is, there are no external influences. Regarding Milton's question on successful business practices, he needs to dig deeper to see what Apex Online did in 1998 that helped it be so successful the following year.

Lesson Summary

In this lesson, you looked at some scatterplots and decided whether to remove possible outliers. You learned that outliers are natural occurrences. Instead of simply removing outliers from a data set, you should always investigate why outliers exist. Sometimes, possible outliers tell you valuable information, like mistakes that were made. Here is a list of the key concepts in this lesson:

- When you identify a data point that lies outside the general trend of the data, it is a possible outlier that requires research to see why it falls outside the general trend.
- True outliers are points that lie outside the general trend of the data due to external influences, such as errors in data collection.
- Sometimes possible outliers happen for good reasons, such as unexpected data or information emerging from the situation itself (that is, no external influences).
- If you determine that a possible outlier exists for reasons consistent with the data itself (that is, no external influences), then keep the data point in your data set.

Analyzing Ever-Changing Markets

Sometimes things just never stay the same. This is especially true in real estate. Sometimes you can learn a lot about a real estate market by seeing how things change in the long term. Consider this real estate example: You are going to purchase a house you plan to keep for at least 40 years. The value of the house at the time you purchased it was $101,000. Examine these two scenarios to see which would be the more advantageous outcome for your investment over that long period of time. Focus on the values of the function and the instantaneous rates of change as you near 40 years.

Here is Scenario 1: the house value went up over the 40 years of ownership. While the instantaneous rate of change started out at greater values, over time the rate of change got smaller but remained positive. In Scenario 2, the house value fluctuated over the years, but overall the value did not gain or lose much. The instantaneous rate of change went up and down through the 40 years. In terms of an investment, Scenario 1 presents the optimal outcome over the 40-year period, not only because of its higher value at the end, but also because of its constantly positive instantaneous rate of change. Essentially, Scenario 1 represents a house or neighborhood that is continually appreciating in value, whereas Scenario 2 represents a house or neighborhood that is stagnant and not appreciating in value at all.

Lesson Summary

In this lesson, you shifted your focus to long-term instantaneous rates of change, using your growing understanding to find the optimal outcome in a particular situation, such as sales growth or the value of real estate. Here is a list of the key concepts in this lesson:

- You can determine which instantaneous rate of change is optimal by looking at points on a graph and how their curves behave over long periods of time.
- Knowing what type of situation is optimal allows people to make relevant decisions in both professional settings and in everyday life.

Reading Line Graphs to Draw Conclusions

Sometimes, the data you work with is not smooth, or it does not have a function that fits the data. Even for raw data, you can still use a line graph to make sense of the data and draw conclusions. This will be very similar to dealing with input-output pairs from smooth curves. Consider this example with a line graph. Rosland is a small business owner who has been trying to grow her business aggressively in the last few years. Starting in 2015, she decided to more closely track her business's quarterly revenue, measured in thousands of dollars. The following table depicts this revenue data (measured in thousands) over the quarters, where t = 1 corresponds to the first quarter of 2015.

Quarter | Quarterly Revenue (in Thousands of Dollars)
1 | 9.35
2 | 9.44
3 | 9.71
4 | 9.82
5 | 9.98
6 | 9.93
7 | 9.96
8 | 9.92
9 | 9.99
10 | 9.98
11 | 9.98
12 | 9.91
13 | 9.91
14 | 9.92
15 | 9.94
16 | 9.96

When Rosland graphed the data using a line graph, she saw the following trend: [The graph shows the revenue of Rosland's business, measured in thousands of dollars, on the y-axis and the quarters of the year since 2015 on the x-axis. There is a fairly steady increase between quarter 1 and quarter 5, starting at about 9.35 thousand dollars and increasing to about 9.98 thousand dollars. After quarter 5, the revenue values stay between y = 9.9 and y = 10. The last few coordinates of the graph include: (12, 9.91), (13, 9.91), (14, 9.92), (15, 9.94), (16, 9.96).]

There are two things to note on this line graph. First, reading the input-output pairs is the same as it has always been. For example, in the first row of data from Rosland's table, you can see that Quarter 1 had a quarterly revenue of 9.35, or $9,350. This means the line graph should start at the coordinate (1, 9.35), which matches the graph above. The second thing to note is that the line graph seems to indicate that the quarterly revenue generated by Rosland's business is leveling off. That is, it seems the quarterly revenue was climbing for several quarters and then seemed to stay right around 9.95. This tells Rosland that her revenues seem to be approaching an asymptote. If this trend continues, Rosland's quarterly revenue will not grow and will instead be stuck around 9.95, or $9,950. Rosland needs to do something new to grow her quarterly revenues.

Lesson Summary

In this lesson, you not only identified input-output pairs from the graph of an uncommon function, but you also interpreted their meanings in context. Here is a list of the key concepts in this lesson:

- If the coordinate (a, b) is on the graph of the function f, then the corresponding function notation is f(a) = b. This is true for all functions, no matter what the function is.
- Interpreting coordinates or function notation for general functions is no different from interpreting coordinates or function notation for linear, polynomial, exponential, or logistic functions. Use the units of measurement of the independent and dependent variables and put the values in context.
- Coordinates are always written with the x-value, or the independent variable, first, followed by the y-value, or the dependent variable.
- Functions always find the output, or dependent variable value, for a given input, or independent variable value. Thus, function notation for the function f can be thought of as: f(input) = output.

Tidal Maxima and Minima

Sometimes, you are presented with graphs that are a little different from normal. Tide graphs and EKGs are good examples. In this section, you will see how you can still find minima and maxima using the same techniques. Consider this example: Maxima and minima happen all the time in real life, such as with ocean tides, which are variations in sea level throughout a day. Periods when sea level is low are referred to as "low tides," while periods when sea level is high are referred to as "high tides." This is a daily phenomenon for everyone who lives near an ocean. Following is a graph of actual tidal data.

A diurnal tide usually occurs when the moon is farthest from the equator; this type has one high tide and one low tide each day. A semi-diurnal tide usually occurs when the moon is directly over the equator and is the most common type; it has two high tides and two low tides each day. A mixed tide can have two high tides and two low tides per day, but the levels are unequal; this type occurs when the moon is far north or south of the equator. The important things to notice are the peaks and valleys, indicating minima (low tides) and maxima (high tides). Next is a more mathematical version of the mixed-tide graph. Given that this data starts at midnight, when are high tides expected for the next day? Refer to the following graph. There are four peaks here, so there are four high tides, roughly occurring at the t-values t = 1.4, 7.3, 14.7, and 20.45. These peaks correspond to the times 1:24 a.m., 7:18 a.m., 2:42 p.m., and 8:27 p.m. Knowing when high tides and low tides are expected is helpful for ship captains, since the extra few feet of water during a high tide is sometimes crucial for safety, and the loss of that depth of water during a low tide can be dangerous.

Consider another real-life function for an electrocardiogram (EKG). An EKG is a record of the electrical activity of the heart, using electrodes placed on the skin. It can help doctors diagnose problems or detect patterns that develop over a period of time. Each part of the following graph shows an important piece of information to a doctor, with the repeating pattern among the most crucial factors. You may not know what each of these maxima and minima represent, but a cardiologist does. In the graph, point P is a local maximum. A cardiologist knows that this peak indicates electrical activity that triggers atrial contraction. Point R represents a global maximum for the graph and represents ventricular contraction. If a heart condition like atrial fibrillation were to occur, the EKG pattern would change: the maximum value would go up and the "P-waves" would disappear. It is important for doctors to understand the graph visually rather than making calculations while standing at an operating table. The next graph shows the EKG pattern for a patient experiencing heart problems. Even with this brief introduction to EKGs, you can see a distinct difference among the patterns in the following image.

Lesson Summary

In this lesson, you found and interpreted minima and maxima in context. You also learned why the ability to glean meaning from visual representations of functions can be so important. Here is a list of the key concepts in this lesson:

- The minimum value of a function is identified on the graph as a valley, or the lowest point, on its graph. The minimum is the actual y-value that occurs at the valley, while the x-value identifies where the minimum occurs.
- The maximum value of a function is identified on the graph as a peak, or the highest point, on its graph. The maximum is the actual y-value that occurs at the peak, while the x-value identifies where the maximum occurs.
- Some functions will not have local minima or local maxima. Some functions will not have global minima or global maxima.
- To interpret a minimum or maximum in context, focus on the independent and dependent variables.
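The clock times in the tide example come from converting decimal hours (hours since midnight) into hours and minutes. Here is a minimal Python sketch of that conversion; it is a simple illustration and does not handle edge cases like minutes rounding up to 60.

```python
def to_clock_time(t):
    # Convert a decimal hour value (hours since midnight) to h:mm a.m./p.m.
    hours = int(t)
    minutes = round((t - hours) * 60)
    suffix = "a.m." if hours < 12 else "p.m."
    clock_hour = hours % 12 or 12
    return f"{clock_hour}:{minutes:02d} {suffix}"

for t in [1.4, 7.3, 14.7, 20.45]:
    print(to_clock_time(t))  # 1:24 a.m., 7:18 a.m., 2:42 p.m., 8:27 p.m.
```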

When Concavity Does Not Exist

Sometimes, you will run into situations where there is no concavity. This will always be the case with line graphs, as you will see below. A developing country holds a national election for its members of congress every other year. This is a line graph of the country's voter turnout rate, in percentage, since 2000. For convenience, name this function V(t) for the voter turnout rate, in percentage, over time, in years since 2000. For this function, there is no concavity because the function's graph is not smooth and curved. It is normal to see functions without concavity in graphs like this. Another common example would be stock market graphs. You might be tempted to say that the function is concave down from t = 0 to t = 4, since the graph could be seen as opening downward on this interval. But someone else could come along and see the section from t = 2 to t = 6 as concave up, since the graph could be seen as opening upward on this interval. The fact is that some people see the section from t = 2 to t = 4 as concave up while others see it as concave down. Because these sections of the graph are lines instead of curves, determining concavity is completely subjective, or, mathematically speaking, impossible. That is why graphs need a bit of curve to them to objectively and definitively determine concavity. Therefore, if you are asked to determine the concavity of a straight line, just know that you cannot do that. Lines never have concavity.

Applying Average Rate of Change

Sparkit is an online video streaming company. Coming up is the graph of the function R(s), which models Sparkit's monthly profit, in thousands of dollars, where s is the number of subscribers, also in thousands. You can now calculate the average rate of change from (20, 19) to (40, 31). Using the rate of change formula, you have:

rate = (y₂ − y₁)/(x₂ − x₁) = (31 − 19)/(40 − 20) = 12/20 = 0.6 thousand dollars per thousand subscribers

The result implies that when the number of subscribers increases from 20,000 to 40,000, each new subscriber brings an average of $0.60 net profit per month for the company. Next, calculate the rate of change from (40, 31) to (60, 51):

rate = (y₂ − y₁)/(x₂ − x₁) = (51 − 31)/(60 − 40) = 20/20 = 1 thousand dollars per thousand subscribers

This result implies that when the number of subscribers increases from 40,000 to 60,000, each new subscriber brings an average of $1.00 net profit per month for the company. Those two rates are different because the function is not linear. As the company gains more and more subscribers, the rate of change in its profit increases. In other words, more subscribers bring higher net profit per subscriber. The next applet allows you to drag points A and B along the function to see the calculated rate of change in different parts. Verify that the more subscribers the company has, the higher the rate of change in profit.

In this section's scenario, calculate and interpret the average rate of change from (10, 16) to (40, 31). The average rate of change is 0.5 thousand dollars per thousand subscribers. It implies that when the number of subscribers increases from 10,000 to 40,000, each new subscriber brings an average of $0.50 net profit per month for the company. Correct! rate = (31 − 16)/(40 − 10) = 15/30 = 0.5 thousand dollars per thousand subscribers.

In a certain year, NetScription's customers decreased from 40,000 to 30,000, and its net profit dropped from $31,000 to $17,000. Using (40, 31) and (30, 17) to model those numbers, calculate and interpret the average rate of change. The average rate of change is 1.4 thousand dollars per thousand subscribers. It implies that when the number of subscribers decreased from 40,000 to 30,000, the company lost an average of $1.40 net profit for each subscriber who unsubscribed. Correct! rate = (17 − 31)/(30 − 40) = (−14)/(−10) = 1.4 thousand dollars per thousand subscribers.
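All four of those slopes can be checked with a couple of lines of Python; this helper is simply the same slope formula written as code.

```python
def avg_rate_of_change(p1, p2):
    # Slope of the secant line through (x1, y1) and (x2, y2)
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(avg_rate_of_change((20, 19), (40, 31)))  # 0.6
print(avg_rate_of_change((40, 31), (60, 51)))  # 1.0
print(avg_rate_of_change((10, 16), (40, 31)))  # 0.5
print(avg_rate_of_change((40, 31), (30, 17)))  # 1.4
```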

Demonstrating Relationships

Start with the example from the housing sector. Recall the main facts: In a relatively predictable pattern, rooms in a house and the number of people living there tend to correlate. For the sake of this exercise, assume that all houses under study have at least four basic rooms: a living room, a dining room, a kitchen, and a bathroom. Beyond that, for every two residents of a home, on average, there will be one bedroom. The original function is R(p)=0.5p+4, which models the number of rooms in a house, where p stands for the number of people living there. It could be useful when trying to find a house for a specific number of people, perhaps for a family trying to determine exactly how many rooms they need. The inverse function is P(r)=2r−8, which models the number of people living in a house, where r stands for the number of rooms. It is less intuitively useful, but still has value. A real estate agent would not want to waste time showing a home that is not large enough to suit a particular client's family. The agent could examine all of the houses for sale, quickly determine the maximum number of people each could comfortably hold, and group the available properties by those numbers. That way, when a new client calls, the agent would know exactly which houses to show. Example: Rona, a real estate agent, is looking to sell a home. She has three potential buyers, and she knows how many people are in each family. All three families are eager to see the house, and Rona knows that the first to see it will probably make an offer. Which function—the original or its inverse—would be more useful to Rona in determining which family should get the first viewing? The inverse function would be more useful here. Rona knows the number of rooms in the house, so she can calculate the number of people it will fit, then match it to the number of people in each family to decide which family would best "fit" the house and give that family the first viewing.
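Here is a minimal Python sketch of the two functions and a check that each undoes the other; the household size used is just an example value.

```python
def rooms(people):
    # Original function R(p) = 0.5p + 4: rooms needed for p residents
    return 0.5 * people + 4

def people(room_count):
    # Inverse function P(r) = 2r - 8: residents an r-room house fits
    return 2 * room_count - 8

print(rooms(6))          # 7.0 -> a family of six needs about 7 rooms
print(people(rooms(6)))  # 6.0 -> the inverse undoes the original function
```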

Interpreting Models for Spa Memberships

Sunrise Sky Spa and Retreat Spa are both located in the same small city and compete for clients, or members. Each spa offers membership packages to its clients. Their memberships from 2000 can be modeled by the following two functions, respectively:

Sunrise Sky Spa: s(t) = 7t³ − 174t² + 1200t + 1300
Retreat Spa: r(t) = −50.6t² + 910.8t + 1101.4

where t is the number of years since 2000. The correct way to read the notation s(t) is "s of t," meaning "s is a function that depends on the variable t." Here are their graphs: You can see that both companies were affected by the 2008 recession, but Retreat Spa's membership kept going down after 2008. Sunrise Sky Spa's membership, on the other hand, started to recover around 2011. Because Sunrise Sky Spa's membership has two turns in its trend while Retreat Spa's membership has only one turn, the degree of s(t) must be one higher than that of r(t). That is, Retreat Spa's function will be of the second degree, a quadratic function, while Sunrise Sky Spa's function will be of the third degree, a cubic function.

You have learned how to substitute an independent variable value into a function to calculate its dependent variable value. However, calculations can become cumbersome sometimes, especially when a polynomial function's degree is high. Instead, you can estimate input-output pairs by using the graph and still reach valid conclusions. For example, when t = 8, the value of r(t) is approximately 5,150, and s(t)'s value is approximately 3,350. In function notation, you can write r(8) ≈ 5,150 and s(8) ≈ 3,350. In coordinate notation, you can write (8, 5150) and (8, 3350) for r and s, respectively. You can see how function notation is more helpful here, as there is no way to know which coordinate goes with each function without looking back at the graph. Also, it is important to be as accurate as possible when estimating coordinates from a graph. Notice that, on the y-axis, the distance from 5,000 to 6,000 is divided into 5 parts, making each part 1000/5 = 200 memberships. This is why r(8) ≈ 5,150. You will work on interpreting these values in context in just a moment.

Consider this next example: Earlier, it was estimated that r(8) ≈ 5,150 and s(8) ≈ 3,350 for these functions. As for interpreting these in context, remember what the independent and dependent variables are here. The independent variable is "years since 2000" while the dependent variable is "number of memberships." So, if r(8) ≈ 5,150, the number of memberships Retreat Spa had in 2008 (8 years since 2000) would be about 5,150. Sunrise Sky Spa also had approximately 3,350 memberships at the same time. You also saw that s(12) ≈ 2,750, which means that Sunrise Sky Spa had about 2,750 memberships in 2012. On the other hand, Retreat Spa had about 5,100 members in July of 2010. You know this because r(10.5) ≈ 5,100. This means that you can interpret the context of input-output pairs by knowing which function you are working with and what the associated input and output variables are (that is, what the independent and dependent variables are).

In this lesson, you learned that it is much easier to interpret input-output pairs in context by looking at the graph of functions and making estimations, rather than calculating the input-output pairs. Here is a list of the key concepts in this lesson:

- Estimate input-output pairs using a graph of the polynomial for less-exact numbers.
- Interpret input-output pairs if you know the function you are working with and what the independent and dependent variables are for that function.
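Since these two models are given explicitly, you can also verify the graph estimates by direct substitution. A minimal Python sketch:

```python
def s(t):
    # Sunrise Sky Spa memberships, t years since 2000
    return 7 * t**3 - 174 * t**2 + 1200 * t + 1300

def r(t):
    # Retreat Spa memberships, t years since 2000
    return -50.6 * t**2 + 910.8 * t + 1101.4

print(s(8))     # 3348    -> matches the graph estimate s(8) ~ 3,350
print(r(8))     # 5149.4  -> matches the graph estimate r(8) ~ 5,150
print(s(12))    # 2740    -> matches s(12) ~ 2,750
print(r(10.5))  # ~5086.2 -> matches r(10.5) ~ 5,100
```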

Given the graph of an unknown function, translate solutions to equations into real-world meaning.

Sunrise Sky Spa has been on a bit of a roller coaster ride the last few years. It saw its membership rise during the initial years of the twenty-first century, then, between 2004 and 2005, things really took off, and membership shot up. After about three years, though, membership began to plunge, dipping even below its starting point in mid-2010, after which membership started rising again. In this lesson, you will estimate solutions to equations by examining graphs, presenting your solutions as either input-output pairs or as coordinates. You will also translate your solutions into real-world meaning based on the context of a given situation.

Graphing Coordinates of Functions

Suppose you have a $20 bill and would like to buy a chocolate bar from the corner store. If the chocolate bar costs $1.50, then you would get $18.50 in change. If you bought a 12-pack of soda for $12.50 instead of the chocolate bar, then you would get $7.50 in change. If the cost of any item is p, then your change is C(p) = 20 − p, where C represents your change and p represents the price. Picture this relationship by plotting the graph, namely all the points (p, C) that satisfy this equation. In the graph above, two of the coordinates, (1.5, 18.5) and (12.5, 7.5), are on the graph because:

1. If you spend $1.50, you get $18.50 in change.
2. If you spend $12.50, you get $7.50 in change.

In fact, any combination of values will work as long as they "make sense" in this scenario. But what does it mean to "make sense"? You can also think of it in terms of the function itself, C(p) = 20 − p. Essentially, substitute these values into the function to see where these coordinates come from. For example:

1. C(1.50) = 20 − 1.50 = 18.50
2. C(12.50) = 20 − 12.50 = 7.50

In short, any set of numbers that satisfies the original equation will be a coordinate pair on the graph. Also, many features of the relationship between variables are easy to observe from the graph. For example, the fact that the line goes down as it goes to the right says that you get less change returned when you spend more. That makes sense, right?

Input-Output Notation and Coordinates

Suppose you manage a team for IT projects at your company. You know that each person on your team can take on two major projects a month. Therefore, if your team only has one person, it can take on two major projects a month; if there are three people on your team, six major projects a month. In terms of function notation, you could define the variables P for the number of major projects you can take on each month and t for the number of people on your team. This allows you to talk about the function relating these variables, P(t), and to also describe some input-output pairs for this function. The following table summarizes the function input-output notation for this scenario.

Description | Function Input-Output Notation
If your team only has one person, you can take on two major projects a month. | P(1) = 2
If you have three people on your team, you can take on six major projects a month. | P(3) = 6
If you have t people on your team, you can take on 2t major projects a month. | P(t) = 2t

At this point, you are familiar with data presented in a coordinate plane as well. Sometimes coordinates can be more helpful than the function input-output notation, especially if you wanted to graph your data. For the IT team data, you could convert each of the function input-outputs into coordinate notation as in the next table:

Description | Function Input-Output Notation | Coordinate Pair
If your team only has one person, you can take on two major projects a month. | P(1) = 2 | (1, 2)
If you have three people on your team, you can take on six major projects a month. | P(3) = 6 | (3, 6)
If you have t people on your team, you can take on 2t major projects a month. | P(t) = 2t | (t, 2t)

Once you understand the different ways to communicate real-world data in mathematical notation, you can also make more sense of the mathematical notation in context. For example, say that the monthly budget, B, your team has for all its projects is $500 per project, P, plus an additional $250. How would you interpret the coordinate (1, 750) in this context? This would mean that for one major project, your monthly budget would be $750. In terms of the function notation, it is important to identify the independent and dependent variables here. Notice that the number of projects, P, determines the monthly budget, B. This means P is the independent variable while B is the dependent variable. This implies that B is a function of P, or that you should write the function notation as B(P). Therefore, for the coordinate (1, 750), the corresponding function notation would be B(1) = 750. Either way, these are equivalent ways of communicating the same set of data, as depicted in the following table:

Description | Function Input-Output Notation | Coordinate Pair
If your team only has one major project, the monthly budget would be $750. | B(1) = 750 | (1, 750)
If your team has P major projects, the monthly budget would be $500P + $250. | B(P) = 500P + 250 | (P, 500P + 250)
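Here is a minimal Python sketch of both functions, showing how input-output pairs double as coordinates for graphing; the function names are just illustrative.

```python
def projects(team_size):
    # P(t) = 2t: major projects per month for a team of t people
    return 2 * team_size

def budget(num_projects):
    # B(P) = 500P + 250: monthly budget in dollars for P major projects
    return 500 * num_projects + 250

# Input-output pairs double as coordinates for graphing:
coords = [(t, projects(t)) for t in (1, 2, 3)]
print(coords)     # [(1, 2), (2, 4), (3, 6)]
print(budget(1))  # 750 -> the coordinate (1, 750), i.e., B(1) = 750
```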

Concavity in Context

The 2008 recession caused a significant loss of business for Sunrise Sky Spa, but happily, its membership recovered once the recession was over. The following function's graph can be used to model the number of Sunrise Sky Spa's memberships, in thousands, since 2008: This function's graph is concave up. What does the concavity mean? First, notice that Sunrise Sky Spa's membership decreased at first (from t = 0 to t = 3) but then started recovering (from t = 3 to t = 8.5). The graph being concave up tells you that Sunrise Sky Spa was always recovering in some sense. Even when the number of memberships was decreasing, it was doing so at a slower and slower rate as time went on. Overall, the number of memberships was recovering, since the loss of memberships was slowing and turning toward an increase in memberships. Concave down would mean that the memberships were going down faster and faster, which would be a much worse situation for Sunrise Sky Spa. In this context, Sunrise Sky Spa's management would rather see a concave-up curve, because that curve implies faster and faster growth in the number of customers.

Adding more context here, suppose that a new CEO for Sunrise Sky Spa was hired in 2008. In the 2008 recession, things were already headed downward. The new CEO could not be expected to turn things around on day one. However, the new CEO could start working to improve things and decrease the decline; that is, the new CEO would want to make this curve concave up. Sure enough, the new CEO accomplished this, and by 2011 (t = 3), the number of memberships started increasing again. This does not mean that the new CEO was not doing a good job in the first three years. It just means that those first three years were dedicated to turning the business around.

Consider another context, like a function modeling a company's debt. Would concave up or concave down be better for a company's debt? A concave-down curve would be best because it would indicate that the debt accumulated is increasing more and more slowly, or decreasing more and more quickly. Always keep in mind that concavity is telling you how the function's values are changing: are they increasing faster and faster, decreasing faster and faster, maybe increasing slower and slower, or decreasing slower and slower? The context of the situation can help you identify which situation would be best. For the example of the company's debt, concave down is the preferable state in any case.

Learning Check

In the following graph, is the function concave up or concave down from x = 0 to x = 0.5? The function is concave down from x = 0 to x = 0.5. Locate point (0, 1) (which is x = 0), and then the ending point (0.5, 0) (which is x = 0.5), to find that the curve for that segment is slightly downward facing.

The national debt is a regularly discussed topic in the United States. Would it be preferable that the national debt is concave up or concave down? It would be preferable that the national debt were concave down. It means that the national debt would be increasing slower and slower or even decreasing faster and faster; both would mean less accumulation of national debt.
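If you have a function's values sampled at evenly spaced inputs, you can also test concavity numerically with second differences: when the differences between consecutive y-values are themselves increasing, the curve is concave up, and when they are decreasing, it is concave down. The Python sketch below uses hypothetical sample values chosen only for illustration.

```python
def concavity(ys):
    # ys: y-values sampled at evenly spaced x-values
    diffs = [b - a for a, b in zip(ys, ys[1:])]          # first differences
    second = [b - a for a, b in zip(diffs, diffs[1:])]   # second differences
    if all(d > 0 for d in second):
        return "concave up"
    if all(d < 0 for d in second):
        return "concave down"
    return "mixed or no consistent concavity"

# A membership-style recovery: decline slows, then growth accelerates
print(concavity([10, 7, 5.5, 5, 6, 8.5]))  # concave up
print(concavity([0, 5, 8, 9]))             # concave down
```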

Regression Equation

The algebraic expression of a regression line, also called a line of best fit, produced by fitting a function to data.

Average Rate of Change

The average rate at which one variable changes in relation to another variable. Start with Grace, who has been exercising by jogging. She bought a smartwatch to track how long she jogs and how far she travels during that time. Examine the following table for the data from Grace's smartwatch.

Time (in hours) | Distance (in miles)
0.25 | 1
0.5 | 2
0.75 | 3
1 | 4

Grace's jogging time and distance can be seen in the following graph. In the applet, you can drag point A and point B along the line, and the applet calculates the average rate of change between those two points. For example, if you drag A to (0.5, 2) and B to (1, 4), you will see the vertical difference, called rise, is 4 − 2 = 2, and the horizontal difference, called run, is 1 − 0.5 = 0.5. Those numbers imply that, in 0.5 hour, Grace jogged 2 miles. To calculate her average rate of change, or speed in this scenario, you can divide: rise/run = 2 mi / 0.5 hr = 4 mi/hr. No matter where you slide points A and B to, the average rate of change stays the same. Why? Notice that the steepness of the line does not change; this implies the rate of change does not change on the line, regardless of the points you choose from the line. Key point: A linear function always has the same rate of change.

Without doing any calculation, compare the average rates of change for the first 8 months and the first 12 months. Which average rate of change is faster? Use the following graph to answer this question.

Without doing any calculation, compare the average rate of change from the second to the fourth month with the average rate of change from the second to the sixth month. Which average rate of change is decreasing more? Use the following graph to solve this problem.

The average rate of change over the first 12 months is increasing more, because it corresponds to a steeper line.

The average rate of change from the second to the fourth month is decreasing more, because it corresponds to a steeper line.

Equations of the two logistic functions are: f(x) = 10/(1 + e^(−0.2x)) and g(x) = 20/(1 + e^(−0.1x)). For the two logistic functions in the graph, compare their average rates of change from x = 0 to x = 100. Then compare their instantaneous rates of change when x = 16, and again when x = 100.

The average rates of change for these functions will be roughly the same; this is true in general for logistic functions. At x = 16, f(x) has a faster instantaneous rate of change. When the x-value is very large, logistic functions stop growing; thus, at x = 100, the instantaneous rate of change approaches 0.

The company's management would rather see the average rate of change from 2000 to 2010, since the time it takes to transfer 1 GB of data is decreasing more rapidly over this time period than it is from 2008 to 2016. This means that there were greater improvements to RAM speed from 2000 to 2010.

The company's management would rather see the instantaneous rate of change at the beginning of 2008.

Function Notation

The conventional way to write a function, precise enough to describe the function without a long written explanation.

Comparing Instantaneous Rates of Change

The cost of producing carbon fiber increases as the length of the fiber increases. You just compared their average rates of change. Now you will compare instantaneous rates of change. The instantaneous rate of change here tells you how much production costs are increasing at a particular length of the fiber. In this case, the "instants" are particular lengths. Seeing the cost increase, the engineer who redesigned the automotive component tried again with the goal of reducing the length of carbon fiber needed for the first version of the component. She lets you know that she now has a version that needs an 8-inch fiber and one that needs a 9-inch fiber. She is still testing both versions to see which one is better. The instantaneous rate of change tells how much the cost of production goes up as the length of the fiber increases beyond 8 inches. This can sometimes be more helpful than an average rate of change, which must use two different lengths of fiber to get an average. It turns out the instantaneous rate of change when x = 8 is 6.41, meaning that the price increases by about $6.41 per fiber, per inch. That is, if the better version of the component requires that 9-inch length of fiber, the cost will be about $6.41 more. Comparing two different instantaneous rates of change can be a little tricky. A graph of the function is helpful in such cases. For example, which x-value has the greater instantaneous rate of change, x = 4 or x = 7? Use the following applet to determine this. You can see from the graph that the instantaneous rate of change at x = 7 will be greater than the one at x = 4. To know the exact values for the instantaneous rates of change, you can move the sliders to 4 and 7; this would allow you to see the instantaneous rates of change 3.67 (when x = 4) and 5.58 (when x = 7). This means the cost of production is increasing more when the carbon piece is 7 inches long than when it is 4 inches long. All this information is summarized in the table, including an interpretation of each instantaneous rate of change.

x-Value   Instantaneous Rate of Change   Interpretation
x = 4     3.67                           When the carbon fiber is 4 inches long, it costs about $3.67 to add an inch of carbon fiber.
x = 7     5.58                           When the carbon fiber is 7 inches long, it costs about $5.58 to add an inch of carbon fiber.

Lesson Summary This lesson contained some complex concepts and cases. Working your way through the carbon fiber example and the call center at the Outdoor Climber's Friend proved you can meet the challenge. Here is a list of the key concepts in this lesson: The average rate of change from the point x = a to the point x = b can be seen and compared to other average rates of change by looking at the slope of the line through the points. The instantaneous rate of change at the point x = a can be seen and compared to other instantaneous rates of change by looking at the slope of the line that touches only that one point. If an exponential function is increasing, the rate of change is greater as x gets larger. The opposite is true if the exponential function is decreasing. The context of a problem needs to be analyzed before you can know if a greater or lesser rate of change is better. When rates of change are negative, a smaller (more negative) rate of change means that the amount is decreasing faster than a larger, or closer to zero, rate of change. Consider the function modeling the price of carbon fibers again, c(x) = 15 × 1.15^x.
Which interval below will have the greatest average rate of change? For the assessment, you should be able to do this just by looking at the graph. If you need some assistance for now, use the interactive applet to determine your answer. The line that passes through these two points has the greatest slope, and, therefore, it has the greatest average rate of change. For what interval of time will the average rate of change in the number of flies be the greatest? The average rate of change will be greatest between 11:00 a.m. and 2:00 p.m. The line that passes through these two points has the greatest slope and, therefore, the greatest average rate of change. For which value of x is the instantaneous rate of change for c(x) the greatest? The line that passes through just x = 8 has the greatest slope. Having the greatest slope means having the greatest instantaneous rate of change. What is true about the instantaneous rate of change for the function c(x)? The instantaneous rate of change is always increasing. As x gets bigger, so does the instantaneous rate of change. The slopes of the lines that touch each point once continue to grow. If you have studied concavity in the polynomial unit, you may recognize this to mean that this exponential curve is concave up. For which interval does increasing the number of representatives decrease customer wait time the most? The customer wait time is decreased the most from 5 to 15 representatives. The average rate of change for customer wait time is the most negative, meaning that it is decreasing fastest.
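As a check on the quoted rates 3.67, 5.58, and 6.41, here is a minimal Python sketch that estimates the instantaneous rate of change of c(x) = 15 × 1.15^x with a short secant; the helper name is illustrative.

```python
def c(x):
    """Cost model for a carbon fiber of length x inches."""
    return 15 * 1.15 ** x

def instantaneous_rate(func, x, h=0.001):
    """Slope of a short secant centered at x, approximating the tangent."""
    return (func(x + h) - func(x - h)) / (2 * h)

for length in (4, 7, 8):
    print(length, round(instantaneous_rate(c, length), 2))
# 4 3.67, 7 5.58, 8 6.41 -- matching the values quoted in the lesson
```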

Given the general shape of data in the following graph, which type of regression should you choose? Given the general shape of data in the following graph, which type of regression should you choose? Given the general shape of data in the following graph, which type of regression should you choose? Given the general shape of data in the following graph, which type of regression should you choose?

The data shows a roughly constant rate of change, which implies a linear regression. The data shows the shape of an arch, which implies a polynomial regression (degree 2); there is one turn in the data, meaning a polynomial of one degree higher than linear is needed to model it (so degree 2). Of these functions, only polynomials can model this general shape. Moreover, since there are three turns, a fourth-degree polynomial should be used. Of these functions, only exponential functions can decrease so fast.

Range

The difference between the lowest and highest values in a set of data.

Webserver Memory Usage

The following graph depicts the peculiar activity that Johan noticed: the web server's memory usage on a certain day, as a percentage, was going up and down. The function driving this graph is definitely not linear or exponential, and it is not easy to fit the data with a polynomial function, either; a very high-degree polynomial might work, but it would produce a very complex model. Even though you cannot model this graph with any common functions, you can still treat it as a function. Name it R(t), which models the web server's memory usage rate, in percentage, where t stands for time in hours since 0:00 of the day in question. For example, by looking at the function's graph, you can identify several coordinates and write their corresponding function notation as well. Some examples are listed in the following table:

Coordinate on Graph   Function Notation
(0, 38)               R(0) = 38
(0.8, 25)             R(0.8) = 25
(3.6, 50)             R(3.6) = 50
(20.8, 70.5)          R(20.8) = 70.5
(24, 31.5)            R(24) = 31.5

What do these coordinates and the function notation mean? Well, the function notation is really saying the same thing as the coordinates; you just need to be able to interpret either one of them in context. Here you know that the independent variable measured the time in hours since 0:00 that day, while the dependent variable measured the percentage of the web server's memory being used at that time. So the coordinate (0, 38) means that at midnight, 12:00 a.m., of the day Johan was questioning, the web server was using 38% of its allocated memory. The function notation R(0) = 38 indicates the same thing. As another example, the coordinate (20.8, 70.5) indicates that 20.8 hours into the day, that is, at 8:48 p.m., the web server was using 70.5% of its allocated memory. Note: Remember that there are 60 minutes in an hour, so 8 tenths of an hour, or 0.8, means you are looking at 60 × 0.8 minutes = 48 minutes. Each of these coordinates and associated function notations can be interpreted very similarly. The following table summarizes a few of the coordinates and their interpretations:

Point          Function Notation   Meaning
(0, 38)        R(0) = 38           At 12:00 a.m., 38% of the web server's memory was in use.
(0.8, 25)      R(0.8) = 25         At 12:48 a.m., 25% of the web server's memory was in use.
(3.6, 50)      R(3.6) = 50         At 3:36 a.m., 50% of the web server's memory was in use.
(20.8, 70.5)   R(20.8) = 70.5      At 8:48 p.m., 70.5% of the web server's memory was in use.
(24, 31.5)     R(24) = 31.5        At midnight of the next day, or 12:00 a.m. the next day, 31.5% of the web server's memory was in use.
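The hours-to-clock-time conversions above are easy to automate. Here is a small Python sketch (the function name is just for illustration):

```python
def hours_to_clock(t):
    """Convert hours since 0:00 (e.g., 20.8) to a 24-hour clock string."""
    whole_hours = int(t) % 24
    minutes = round((t - int(t)) * 60)  # 0.8 hours -> 48 minutes
    return f"{whole_hours:02d}:{minutes:02d}"

print(hours_to_clock(0.8))   # 00:48, i.e., 12:48 a.m.
print(hours_to_clock(3.6))   # 03:36, i.e., 3:36 a.m.
print(hours_to_clock(20.8))  # 20:48, i.e., 8:48 p.m.
```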

Approaching Zero

The following graph models the decreasing costs of computer equipment since 1999: As the x-value of the function increases (that is, as time goes by), the graph of the function gets closer to the x-axis (y = 0). This means that the asymptote is y = 0, but what does that mean in this context? An asymptote of y = 0 means that the average cost of computer equipment (at least, from 1999) is tending toward 0. You might be thinking that this does not make sense, since cheap prices are rare for computer equipment these days. However, current technology is more expensive because it has improved significantly from that available in 1999. As new technology emerges, new mathematical models must be made to see how its value holds up over time. A lot of people model data like this to estimate how the economy is growing, shrinking, or staying the same. The next graph shows the increased capabilities of computers over time. In this graph, as the x-value of the function gets large, the y-value of the function is increasing. However, as the x-value of the function decreases (that is, gets closer to 0), the function decreases toward the x-axis. The equation of the horizontal asymptote is y = 0, but what does that mean? It means that when computers were first invented, they had few of the capabilities today's computers have. Look at a screenshot from one of the early home computers, the Commodore 64, and you will see that today's computers are worlds beyond what the Commodore 64 was capable of. Also notice that the Commodore 64 had only 64K of memory (that is, 64 kilobytes), hence the 64 in its name. Modern computers have about a million times more memory capacity than that. In this context, you can see why the capacity of computers does appear asymptotic. As you go back in time, the capacity of computers does tend toward 0. Remember, it is the fact that the function's values tend toward 0 (or toward the line y = 0) that indicates an asymptote. Estimate the equation of the horizontal asymptote of the function shown in the graph of a new coffee shop that is opening in town. The equation of the horizontal asymptote is y = 15. As the x-value of the function gets closer to 0 (or toward the negative x-direction), the graph of the function gets really close to the line y = 15 without ever crossing it. A new coffee shop is opening in your neighborhood. Business is a little slow at first, but it starts picking up as people talk about how good the coffee is. If the y-axis measures customers and the x-axis measures time (in days), how do you interpret the asymptote here? From the time the coffee shop first opened to about the 10th day it was open, there were about 15 customers per day; that is, the number of customers was pretty steady for the first 10 days. After that, business started picking up, with more and more customers coming in each day. Lesson Summary In this lesson, you interpreted the meaning of several exponential asymptotes, such as those involving the cost of computer equipment, the rate of decay of carbon-14, and the growth in the customer base for a streaming video service. In general, you saw how asymptotes indicate that the response variable has a clear minimum or maximum sometime in the past or future. Here is a list of the key concepts in this lesson: To find the horizontal asymptote of a function, find the y-value that the function is getting really close to (or tending toward). To interpret the asymptote, look at what the response variable is measuring.
This will tell you what the function is tending toward as the x-values either get bigger (more positive) or smaller (more negative). To find the horizontal asymptote even in a table of values, find the specific value the y-values get really close to (or tend toward); that value is the horizontal asymptote. Once you know what the y-values tend toward, you can interpret the horizontal asymptote in context. If the function is getting really close to the x-axis, then the equation of the horizontal asymptote is y = 0. A school starts a program to reduce food waste. The staff examines the graph of the function that models the amount of food waste in pounds x days after the program was started and notices that a horizontal asymptote at y = 10 starts to show up in the data. What does the horizontal asymptote mean in this scenario? The minimum amount of food waste is 10 pounds. The function is decreasing as it approaches the asymptote, but it seems the school can never get the amount of food waste below the 10-pound threshold. An old social networking website is experiencing a decline in popularity. The number of users is decreasing over time and is modeled by a function whose graph has a horizontal asymptote of y = 100. What does this asymptote mean in terms of the situation? The number of users may be decreasing, but there seems to be about 100 users who continue to use the site as time goes on.
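Here is what "y-values tending toward a value" looks like in a quick table of values. The decaying-cost function below is hypothetical (the lesson's model is not given), but evaluating any function at larger and larger x-values reveals its horizontal asymptote the same way.

```python
import math

def cost(x):
    """A hypothetical decaying cost model with horizontal asymptote y = 0."""
    return 1200 * math.exp(-0.3 * x)

for x in (5, 10, 20, 40):
    print(x, round(cost(x), 4))
# 5 267.7562, 10 59.7445, 20 2.9745, 40 0.0074 -- tending toward y = 0
```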

Age and Salary

The following scatterplot displays the correspondence between Clean Pro's janitors' annual salaries and their ages. To use this regression to decide how much budget will be available when some of the older janitors retire, you need to examine this model to decide if "Older janitors make more money" is a valid statement. The best measure of this is the correlation coefficient, r. Recall that if the correlation coefficient is close to 1 or −1, the independent variable, age, and the dependent variable, annual salary, are highly correlated; at minimum, it means that these variables are associated in some way. If the correlation coefficient is close to 0, the independent and dependent variables are not correlated. Moreover, if the correlation coefficient is positive, it means that as the independent variable increases, the dependent variable increases as well; a negative correlation coefficient means that as the independent variable increases, the dependent variable decreases. So, to answer the question about the correlation between age and salary, and ultimately about what will happen to the budget when older workers retire, you need to check the regression's correlation coefficient. But before doing any of that, do not forget to check the SOME aspects of the model (sample size, outliers, model strength and choice, and extrapolation): For S, it appears that the sample size is adequate here. For O, it appears that there are two possible outliers that must be attended to; they either need to be explained and kept in the data set or removed from the data set because they are true outliers. You might think that those outliers mean to stop interpreting this model at this point, but in fact, that is not necessary for this particular question. Why not? Outliers decrease the correlation coefficient, so if r = 0.95 with those two outliers, then the correlation coefficient will only improve, growing even closer to 1, if those two data points are removed. Even if these data points are reviewed and it is decided to keep them in the data set, meaning they are not true outliers, then the correlation coefficient will stay the same, and it will still be strong. Therefore, Clean Pro can answer the question even with these possible outliers included. Evidently, it is true that the older janitors make more money than the younger ones at Clean Pro Janitor Services, and when some of these older workers retire, the savings in salary budget will be significantly more than if some of the younger ones left the company. Here is another thing to keep in mind: You have only validated that it is true that older janitors make more money. You cannot say that the older janitors make more money because of their age. A correlation between two variables just shows that the two variables relate to one another, not that changes in one variable cause changes in another. This is a very common misconception when interpreting correlation coefficients. Never assume that two variables cause changes in one another just because they are correlated. A thorough research study is always required to show causation.
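If you have the raw data, the correlation coefficient is one line of NumPy. The (age, salary) pairs below are hypothetical stand-ins, since the Clean Pro data is shown only as a scatterplot.

```python
import numpy as np

ages = np.array([24, 30, 35, 41, 48, 52, 58, 63])
salaries = np.array([28000, 31000, 33500, 36000, 40000, 42500, 45000, 47500])

r = np.corrcoef(ages, salaries)[0, 1]
print(round(r, 3))  # close to 1: a strong positive correlation

# A strong r shows association only; it never proves that age causes pay.
```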

Calculate the Average Rate of Change

The formula to compute an average rate of change is (y2 − y1) divided by (x2 − x1). Given the coordinates (1, 2) and (3, 5), what is the average rate of change? Rate = (5 − 2)/(3 − 1) = 3/2. Given the coordinates (−1, 2) and (3, −5), what is the average rate of change? Rate = (−5 − 2)/(3 − (−1)) = −7/4.

Clock Speed

The frequency at which a computer's central processing unit (CPU) runs; an indicator of the computer's speed.

What does the instantaneous rate of change at these two points tell you about this function? What does the instantaneous rate of change at these two points tell you about this function?

The function is decreasing faster at x = 12 than it is at x = 14. The line touching point B is steeper than the line touching point A. The function is decreasing slower at x = 0 than it is at x = 6. The line touching point B is steeper than the line touching point A.

Use the following graph of the Upscale Nest's revenues to determine the concavity from x = 1 to x = 2. Review the graph of the Upscale Nest's revenues to determine the concavity from x = 3 to x = 4. Which statement is true for the segment from point A to point B? Interpret the concavity of the graph between point C and point D.

The function's graph is concave down from x = 1 to x = 2. The function's graph is concave up from x = 3 to x = 4. The function's value was decreasing faster and faster. The function is decreasing faster and faster, which matches the function being concave down from point A to point B. The company's revenue was increasing faster and faster. After a time of decreasing in the previous months, the revenue is now growing. This is a concave-up segment, and the function is increasing.

Identify the concavity at point A, if possible. Identify the concavity at point B, if possible.

The function's graph is concave up at point A. The function's graph does not have concavity at point B.

Starting Points and Slopes

The functions you are studying in this lesson are called linear functions because the graphs of these functions are lines. (The word "linear" comes from "line.") There are two important aspects of linear functions: starting values and slopes. Remember Seth's car trip? Seth began his trip from home, so his starting distance was 0 miles. After 250 miles, he pulled over for some food. When Seth resumed his trip, he was 250 miles from home. So, for the second leg of the trip (the blue dotted line in the following graph), Seth was starting from 250. In terms of the linear function f(x) = mx + b, the value of b is the starting value of the function. For the first leg of Seth's journey, b = 0; for the second leg of the journey, b = 250. Next, you will learn about slope. A faster speed means a greater slope, while a slower speed means a lesser slope. There are also positive slopes (lines that increase) and negative slopes (lines that decrease). In the previous examples, Seth was always driving at 65 miles per hour. If he drove for two hours, he covered 130 miles. In three hours, he traveled 195 miles. You can see that the ratio from one hour to the next remains the same, because 65/1 = 130/2 = 195/3 is a true proportion. Another way to say this is that Seth travels 65 more miles for each additional hour on the road. Examine the graph of this function to notice that the line is increasing to the right (positive). If you look back at all the examples, in f(x) = mx + b, m has been the rate at which each function increases or decreases, which shows up as the steepness of the lines. In fact, that is exactly what m is: m tells you how fast the linear function increases or decreases, which is called the slope of the line. You will learn exactly how the slope, m, works in later lessons.
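Both legs of Seth's trip fit the f(x) = mx + b pattern; only the starting value b differs. A quick Python sketch:

```python
def leg1(t):
    """First leg: slope m = 65 mph, starting value b = 0 miles."""
    return 65 * t + 0

def leg2(t):
    """Second leg: same slope, but starting value b = 250 miles."""
    return 65 * t + 250

print(leg1(2))  # 130 miles after 2 hours of the first leg
print(leg1(3))  # 195 miles after 3 hours
print(leg2(1))  # 315 miles from home, 1 hour into the second leg
```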

General Shapes of Data

The general shape of data is one important factor in deciding which model to choose for data regression. The nature of the scenario can also help. For example, for a mutual fund investment with a constant average percentage of interest every year, choose an exponential regression, even though the data could be modeled by a polynomial or linear regression. For the height of a free-falling object, choose a quadratic regression, a choice dictated by scientific facts. Lesson Summary In this lesson, you learned that most of the time, real-life data will not perfectly fit any of the common model types, so you need to be able to choose one by looking at the general shape of the data. Here is a list of the key concepts in this lesson: A high coefficient of determination (r²-value) alone is not enough to judge which type of function should be used to fit a data set. Linear regressions, which fit data sets that increase steadily, are simpler and easier to work with than other types of models. Polynomial regressions work best for data with turns, but polynomial models always tend toward infinity or negative infinity as the x-values get bigger. Exponential models work best for data increasing or decreasing at a constant ratio, but an increasing exponential model tends toward infinity as the x-values get bigger, and a decreasing one levels off toward its asymptote. Logistic models are good for data with natural lower and upper limits. Models can always be updated as new data is collected, so update your models regularly.

The minimum for a particular interval, or range, of a function. What was the global maximum here? What was the global minimum here? When did the global maximum occur? When did the global minimum occur?

The global maximum was about 77% of bandwidth usage. The global maximum is the specific y-value that represents the greatest value of the function. The global minimum was about 48% of bandwidth usage. The global minimum is the specific y-value that represents the lowest value of the function. The global maximum occurred around 11:30 a.m. (t = 5.5). The global minimum occurred at the end of the day, 8 p.m. (t = 14).

Maxima

The greatest amount, or highest value, on a function's graph.

Independent Variable

A variable that, as it changes, affects another variable (called the dependent variable); for example, as a person pumps more gas into a car's tank (the independent variable), the cost of the purchase rises (the dependent variable). The independent variable is the variable that explains, influences, or affects the other variable. The independent variable is put on the x-axis, or horizontal axis, on a graph. Examples: Consider any occupation where someone earns an hourly wage. At $15 per hour, the number of hours worked influences total pay, so the number of hours worked is the independent variable. This is because the number of hours worked explains how much the individual is paid. In the example about children's height, the age of a child influences his or her height, so age is the independent variable. Clearly, a child's height does not influence his or her age. Finally, in the example about ordering computers, since the number of computers in the office drives the cost, the number of computers is the independent variable. Sometimes it can be hard to identify the independent variable without context. In fact, sometimes the independent variable can change depending on context as well. It is always important to pay attention to the context of variables in all situations. You own a car wash that charges $10 for each car that goes through. If you wash 50 cars a day, your revenue is $10 × 50 = $500. If you only wash 20 cars a day, your revenue is $10 × 20 = $200. Which variable is the independent variable? Since the number of cars washed affects your revenue, the number of cars is the independent variable.

Compare the instantaneous rates of change at 6 a.m. (x = 6) and at 8 a.m. (x = 8) in the following graph. At which time is the function increasing faster? Compare the instantaneous rates of change at noon (x = 12) and at 2 p.m. (x = 14) using the following graph. At which time is the function decreasing faster?

The function is increasing faster at 6 a.m. than at 8 a.m. because the slope of the tangent line at x = 6 is steeper. The function is decreasing faster at noon than at 2 p.m. because the slope of the tangent line at x = 12 is steeper.

Maximum Value

The largest value in a set of data.

Input-Output Pairs on a Graph

The last piece in this lesson is estimating input and output values of a function from its graph, even without knowing the algebraic formula of the function. Once again, consider Seth's trip. Examine the following graph. In this example, the x-coordinate of any given point tells you how many hours of driving Seth has done, while the y-coordinate tells you how many miles have been traveled. For instance, the point (1, 65) says he had traveled 65 miles after an hour, which certainly makes sense, and the point (3, 195) says he had traveled 195 miles after 3 hours. Now assume that Seth wants to know how far he will have traveled by the time he has been on the road for 4 hours and again for 8 hours. Locate the point at the intersection of the line and the 4-hour mark. The number on the y-axis shows how far Seth will have gone after 4 hours. He will have gone about 260 miles. Try it again for 8 hours of driving. Locate the point at the intersection of the line and the 8-hour mark. The number on the y-axis shows that at this time, Seth will have covered about 520 miles. Now, take another look at the graph Seth made to figure out which of four car deals would be cheapest in the long run. Seth is interested in knowing how much he would have paid for his new car at different times during the 72-month contract he intends to sign for each of the four offers. First, refer to the points labeled Deal A and Deal D on this graph, positioned at 24 months, or 2 years. How much would Seth have paid after 24 months if he accepts Deal A (point A)? How much at that same time if he accepts Deal D (point D)? Just by inspecting the graph, you can tell that for Deal A, Seth would have paid nearly $12,000 for his car after 24 months. For Deal D, it would have been $8,000. Now refer to the points labeled Deal B and Deal C, which represent the amounts paid for Deals B and C at 36 months, or 3 years. If Seth accepts Deal B, he would have paid about $12,500 after 3 years. If Seth accepts Deal C, he would have paid a little more than $11,000 after 3 years. This information might be very helpful to Seth if, for instance, he plans to sell this car before the contract is paid off at 24 or 36 months. In such a case, he might want to have as little as possible invested when he sells it, instead of paying off the car as economically as possible. You can probably see the usefulness of using a graph to estimate and interpret input-output pairs in situations like this. One way to tell if a situation is related to a linear function is to look at whether the change in the value of the function (or the output) is proportional to the change in the value of the input. Ramona manages a call center where technicians answer calls from customers who have problems with their equipment. The following graph of f(t) models the number of calls the call center can answer per hour, where t is the number of technicians on duty. Seven on-duty technicians can handle about 65 calls per hour: the point shows how many calls 7 technicians can handle together, and that is about 65. During a service outage, the number of calls at Ramona's center jumps to 212 per hour. If Ramona wants to keep the same level of service during this peak time, how many technicians are needed to handle all these calls?
It will take 23 technicians to help approximately 212 customers in an hour. Lesson Summary In this lesson, you learned how inputs, such as time and speed, can tell you the output, such as distance, for a linear function. You also learned how a linear graph could help you see relationships between inputs and outputs, even without knowing the function itself. Here is a list of the key concepts in this lesson: To calculate the value of a function f(x) = mx + b with input c, multiply the value of c by the value of m and add b to the product. The value of m is the slope of the line, a measure of how steep the graph is. The value of b is the y-intercept, or the value of the function when x = 0. The numbers m and b in a linear function determine what the graph of that function looks like. You can estimate input and output values of a linear function by closely inspecting the graph of that function, even if you do not know the algebraic definition of the function.
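Since the call-center graph is linear through the origin, each technician handles about 65/7 calls per hour. Here is a sketch of that model and of inverting it to answer the staffing question (rounding up, since you cannot staff a fraction of a technician):

```python
import math

CALLS_PER_TECH = 65 / 7  # about 9.29 calls per hour per technician

def calls_handled(t):
    """Calls per hour that t on-duty technicians can answer."""
    return CALLS_PER_TECH * t

print(round(calls_handled(7)))          # 65 calls with 7 technicians
print(math.ceil(212 / CALLS_PER_TECH))  # 23 technicians for 212 calls/hour
```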

Growing a Business: The Business Case for Limits on Growth

The last thing that you should always assess for a model is its validity. Think of validity as a last check to determine that the predictions of a model are not outside what is reasonably possible. Said another way, in this last step, you are looking at the implications of a model and making sure it does not imply something impossible, such as predicting someone could jump over a building, businesses that have more revenue than is financially possible, or computers running faster than light speed. Consider this example: The following scatterplot shows Amazon's revenues since 2000 with an exponential regression. [The graph plots Years since 2000 on the horizontal axis and Revenue in Billions of Dollars on the vertical axis. A curve rises with increasing steepness through (negative 3, 1) and (14, 96). A series of data points are plotted on the curve and the equation near the curve reads: f of x equals 2.4 times e to the power of left parenthesis 0.26 x right parenthesis and r squared equals 0.83.]© 2018 WGU, Powered by GeoGebra The chief financial officer (CFO) of Amazon wants to predict the company's revenue in 2050. This is done by using x = 50 in the regression function f(x) = 2.4e^(0.26x), producing f(50) = 2.4e^(0.26 × 50) ≈ 1,061,790. In 2050, according to this function, Amazon's revenues will reach approximately 1,062 trillion dollars (the function's outputs are in billions). Is this a valid conclusion? Go through the four tools in SOME plus a new, fifth one (V, for validity) to see. For S, sample size, there are 14 data points here, so this is an adequate sample size. For O, outliers, there do not appear to be any possible outliers here. For M, model strength and model choice, with r = 0.91 and r² = 0.83, model strength is strong. Also, an exponential function would be appropriate here. An argument could be made for a logistic function, but nothing in the data so far suggests the presence of the tell-tale S-curve. For E, extrapolation, this prediction of Amazon's revenues is an extrapolation, and it is an extreme extrapolation. This means this extrapolation should be performed and interpreted by a regression professional before it can be considered trustworthy. Say for a moment that someone was arguing that x = 50 was not too far to extrapolate out to, and to consider this prediction in earnest. In such a case, the final, fifth tool, validity, can be a great way to assess how reasonable a conclusion from this model is. Consider the validity of this conclusion: For V, validity, while it is theoretically possible that Amazon's revenues could reach $1,062 trillion, it is unlikely. As a comparison, the 2016 United States gross domestic product (GDP) was $18.57 trillion. GDP is the measure of revenue for all U.S. businesses. It is hard to imagine that Amazon's revenue would ever reach more than 50 times the amount of revenue for all U.S. businesses in 2016. Another thing to keep in mind is that nothing measurable is limitless. Even though there is no sign of the S-curve typical of logistic functions in the data set presented above, it may be prudent to try a logistic regression. Although a logistic curve may underestimate or overestimate Amazon's future revenues, it is very clear that this exponential regression is overestimating them. A logistic model would at least put a more reasonable and valid revenue capacity, or upper limit, on Amazon.
One final way to think about it: Even if Amazon were the only retailer on the face of the planet, there still would only be a finite amount of money to be spent at Amazon, thus reinforcing the idea of an upper limit here. What should you conclude from all of this? Amazon's revenue will not likely reach $1,062 trillion in 2050, so this prediction should not be trusted. The root of the problem is extrapolating too far out into the future with an exponential function when a logistic function would be much more reasonable and provide more valid conclusions.
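The extrapolation and the validity check are both quick to reproduce. This sketch uses the regression equation from the scatterplot; the GDP figure is the one quoted above.

```python
import math

def f(x):
    """Exponential regression: revenue in billions, x = years since 2000."""
    return 2.4 * math.exp(0.26 * x)

prediction = f(50)
print(round(prediction))  # ~1061790 billion, i.e., about $1,062 trillion

gdp_2016_trillions = 18.57
print(round(prediction / 1000 / gdp_2016_trillions))  # ~57 times the 2016 GDP
```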

Local Maximum

The maximum for a particular interval, or range of a function.

Global Maxima

The maximum over the entire range of a function.

Local Minimum

The minimum for a particular interval, or range of a function.

Global Minima

The minimum over the entire range of a function.

Least-Squares Regression (LSR) Algorithm

The most commonly used regression algorithm (process), which is generally seen as the best technique for regressions. The least-squares regression (LSR) algorithm is used to find the best-fit line for a scatterplot. The best-fit line can also be called a regression line. You do not need to study the regression process or how it works, though, since you are focusing on interpreting the results of an LSR to make sense of the information the scatterplot offers. Now examine an updated version of the graph from the last example. The solid red line, with an equation f(x) = −0.45x + 7.13, is the best-fit line. You can see from the line that as the time it takes for a technician to answer the call increases, the customer's satisfaction decreases, but maybe not as much as you would expect. The best-fit line is not simply a line that goes from the highest point on the left to the lowest on the right (or vice versa); rather, it is calculated by taking into account the positions of all of the points. A question on a survey asks customers to rate an IT technician's politeness. The following graph relates ten customers' responses to that question to the amount of time they were on hold and shows four possible best-fit lines.
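If you are curious what the LSR algorithm produces, NumPy's polyfit performs it. The hold-time and satisfaction values below are hypothetical, chosen to resemble the example; the fitted slope and intercept come out close to the lesson's f(x) = −0.45x + 7.13.

```python
import numpy as np

hold_minutes = np.array([1, 2, 3, 4, 5, 6, 8, 10, 12, 14])
satisfaction = np.array([7, 6.5, 5.5, 5, 5, 4, 4, 2.5, 2, 1])

# Degree-1 least-squares fit: returns the slope and intercept of the line
# minimizing the sum of squared vertical distances to the points.
slope, intercept = np.polyfit(hold_minutes, satisfaction, deg=1)
print(round(slope, 2), round(intercept, 2))  # about -0.44 and 7.1
```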

Trends in Data Continued

The next area to examine is data trends. A set of data can trend up (increase), down (decrease), vary up and down, or just remain constant. Note that trends generally address patterns in the dependent variable, not the independent variable. In fact, it is often best to have a steady increase in the independent variable just so that comparing what happens in the dependent variable is as easy as possible. You are observing this data with the year as the independent variable and unemployment rate as the dependent variable. Notice how time increases in a steady manner (one year for each entry). However, time is the independent variable, so its change is not the trend you are interested in. The dependent variable, on the other hand, provides some useful information: The unemployment rate decreased steadily from 2009 to 2016, but not at the same rate every year. This pattern in data affects your trend line if you present this data in a graph, as follows: While graphs make it easier to see a trend, you also need to be able to see such trends in tables of data. Be sure you can see the trends in both tables and graphs. Now examine a set of data demonstrating an increasing trend. In this case, it is data speed as a function of year and technology type. In graph form, notice that while there is not a steady increase, the data still trends distinctly upwards:

Throwing Out Possible Outliers

The next graph shows the number of PCs Progress Hospital purchased every year since 2000. A function of best fit is given with the corresponding coefficient of determination. Examine the function using the following interactive GeoGebra applet. Point G is an outlier. Why? There was a budget freeze at Progress Hospital in 2006. As a result, fewer new PCs were purchased than normal in that year. This outlier pulled the curve of best fit toward it, creating gaps between the curve and many points above it. As a result, the coefficient of determination is only r² = 0.74. Due to the nature of this outlier, a budget freeze that did not happen again, it is reasonable to remove it from the data set in order to get a better curve of best fit. In the next applet, you will see a second regression function that has the outlier, point G, removed. You can see the general fit of the model is better, and the regression's coefficient of determination is almost perfect with r² = 0.99. In the Progress Hospital scenario, how did the outlier affect the coefficient of determination? Since the coefficient of determination measures how closely the function fits the data, and outliers are always further out from the general trend of the data and from the regression function, the outlier made the coefficient of determination smaller. The outlier overestimated the predicted value for the number of PCs purchased in 2016. Lesson Summary In this lesson, you learned that an outlier affects a regression function's equation and graph and also decreases the coefficient of determination. Ultimately, outliers can interfere with any predicted values. If there is a good reason, such as a very rare event, remove outliers from a data set to improve the accuracy of predicted values. Here is a list of the key concepts in this lesson: Generally, outliers decrease the coefficient of determination. Outliers also change the equation and the graph of the regression function. If an outlier is visually spotted in the data or is known ahead of time, it is generally removed from the data set to improve the fit of the regression function. If you are given the choice between a regression on data with outliers included and one with true outliers removed, you should generally favor the one without outliers.

Common Ratio

The number by which each term in a sequence is multiplied to find the next number in the sequence; for example 4 is the common ratio for the sequence [1, 4, 16, 64, 256...].

Online Gamers Continued

The online game Instinct Fighters has been released for two years. The following scatterplot presents data on the number of daily online gamers, in thousands, collected every two weeks of last year. Sarah wants to analyze the data pattern and make predictions. To this point, you do not have a lot of tools to use for dealing with this data. Soon you will work on identifying the kind of function that might model a data set like this one. For now, there is an easier way to work with this data: using a line graph. A line graph shows sequential data points linked with line segments. You have probably seen a lot of line graphs before, like the following graph that displays the data points from the previous scatterplot, with the linking lines added. [The scatterplot has Time Since January Last Year in Months plotted on the x axis and Number of Online Gamers in thousands plotted on the y axis. A set of data points are plotted closely together in an almost linear pattern and connected by line segments. The first point is located at (0, 12.1), data points rise in an almost linear pattern to about (6, 15), fall slightly, then rise again, and end at (12, 18). Two data points are located away from the pattern, approximately at (1, 9.5) and (8, 17). The data points are labeled from left to right as follows: B 1, C 1, D 1, E 1, F 1, G 1, H 1, I 1, J 1, K 1, L 1, M 1, N 1, O 1, P 1, Q 1, R 1, S 1, T 1, U 1, V 1, W 1, X 1, Y 1, Z 1, A 2, B 2, C 2.]© 2018 WGU, Powered by GeoGebra As you may have guessed, points D1 and S1 are two possible outliers, or data points that are distinctly separate from the others. On further investigation, Sarah found that D1's data was collected on the day of the Super Bowl, explaining why the number of online gamers was so low that day. The data for S1 was collected while a competing game was offline for an update, prompting an unusual number of players to resort to Instinct Fighters. Since these two data points lie outside the general trend of the data due to unusual external influences, you should consider them outliers and throw them out. Keep in mind that another good definition for an outlier is any point that lies outside the general trend of the data due to external influences. Just to recap, note that you should not remove possible outliers without investigation. Sometimes outliers happen for good reasons, such as unexpected data or information due to the situation itself (that is, no external influences) or errors in data collection. If the reason is unexpected but valid data, keep the data points in your set; they are not really outliers, as they are representative of a real trend. If the problem is an error in collecting the data, always correct it if possible. If the data cannot be corrected, the data points should be eliminated. After throwing out the two outliers, the new data set is represented in the following line graph. [The scatterplot has Time Since January Last Year in Months plotted on the x axis and Number of Online Gamers in thousands plotted on the y axis. A set of data points are plotted closely in an almost linear pattern. The first point is located at (0, 12.1), then the data points rise in an almost linear pattern to about (6, 15), fall slightly, then rise again, and end at (12, 18). These points are joined with a line.]© 2018 WGU, Powered by GeoGebra You could now use this line graph to look at general trends in the data.
For example, the line graph makes it easier to see that the number of online gamers has been steadily climbing from January of last year, x = 0, to December of last year, x = 11. In fact, it looks like the number of gamers went from about 12,100 to about 16,900 over this time frame. Sarah could now calculate an average rate of change over the year to see how quickly the number of players for Instinct Fighters is growing. Notice that she could also calculate the average rate of change without even having a function here; she could just use the coordinates (0, 12100) and (11, 16900). In summary, once you have deleted any outliers from a scatterplot, you always have an option to just view the data as a line graph and work with the data that way. You can even still use some tools, like average rates of change, on line graphs.
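Here is that calculation as a Python sketch: drop the two outliers, then compute the average rate of change from the remaining endpoints. Only a few representative points are listed, since the full data set appears only in the graph.

```python
# (month, gamers) points, abbreviated; D1 and S1 are the known outliers.
data = [(0, 12100), (1, 9500), (6, 15000), (8, 17000), (11, 16900)]
outliers = {(1, 9500), (8, 17000)}  # Super Bowl day; competitor outage

cleaned = [point for point in data if point not in outliers]

(x1, y1), (x2, y2) = cleaned[0], cleaned[-1]
rate = (y2 - y1) / (x2 - x1)
print(round(rate))  # ~436 additional gamers per month over the year
```

Since (16900 − 12100)/11 ≈ 436, Instinct Fighters gained on the order of 440 players per month last year.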

Given a data set and a proposed function to model the data, evaluate any real-world constraints that impact the model.

The online game Instinct Fighters was just launched. A scatterplot displays data on the number of daily online gamers since January 1 and web-server manager Maria wants to analyze the data pattern and make predictions about the number of future gamers playing the game. She will use a regression function to do so. Regression is used to calculate missing data and predict future data. However, there are limitations and constraints when running regressions. One issue is how many data points are needed to produce a good regression. Another issue is that a regression function only works within constraints or limitations. In this lesson, you will consider limitations on when and how models should be used.

Given a data set and a proposed function to model the data, interpret the corresponding coefficient of determination.

The online game Instinct Fighters was just launched. Web-server manager Maria wants to analyze the data pattern and make predictions, and she used a linear regression to model the data. When you run a regression, how do you judge whether the function fits the data? How do you decide whether one function fits the data better than another function? There is a measure to help with these questions and you will learn about it in this lesson.

Given a scatterplot of real-world data, determine which family of functions would be most appropriate to model the data.

The online game Instinct Fighters was released two years ago. A scatterplot displays data on the number of daily online gamers, in thousands, collected every two weeks of last year. Sarah is still working on her analysis of the data pattern. She is almost ready to find a function that fits her data so that she can make predictions on the future of her business. In real life, you will rarely see a set of collected data that exactly fits a common function, such as a linear, polynomial, exponential, or logistic function, but you still need to do a data regression with a common function. Sometimes this means using a less-than-ideal coefficient of determination, but it is necessary to make decisions based on the model. In this lesson, like Sarah, you will work on determining which type of function might work best for modeling given data. You will learn that you need more than coefficients of determination (r²-values) to select a type of function and that models can always be updated as new data is collected. You will also review the best uses for different types of regressions: linear, polynomial, exponential, and logistic.

Output Variable

The output, represented by one or more letters, such as m or d(t), produced by a function, based on the input variable.

What reason might explain Scenario 1 for the sales of Period Pens's new disposable gel pens? Use the following graph to answer this question. The CEO of Period Pens is looking to improve sales from what you saw before. Below is a new scenario, Scenario 3, that tries to improve on the sales you saw in Scenario 2 from before. Look at these two scenarios when the time reaches 400 weeks.

The pens ran out of ink rapidly, so they became unpopular. This is one possible explanation for why sales peaked early and continued to decline with a smaller and smaller instantaneous rate of change. The CEO would still like to see Scenario 2 because of its positive instantaneous rate of change in the long run. In Scenario 2, new pen sales continue to increase more and more compared to how they level off in Scenario 3.

Market Share

The portion of total sales in a given market made by a specific product or company, measured by percentage.

Information Age

The present time, when information is widely available to many people through computer technology.

Data Regression

The process of examining data points to determine a valid equation. "Regression" refers to bringing different data points down toward a normal average - "regressing" them.

Regression

The process of examining data points to determine a valid equation. "Regression" refers to bringing different data points down toward a normal average - "regressing" them.

Instantaneous Rates of Change

The rate of change at a particular moment, as opposed to average rate of change, which is change over a period of time. The average rate of change calculates the change over an interval, a specific segment of a line on a graph. However, the instantaneous rate of change is found for a particular point on the function's graph. What does this mean? Say that a car traveled 70 miles in 2 hours. The average rate of change over those 2 hours is 35 miles per hour. However, it does not mean the car was traveling at a constant speed of 35 miles per hour during the trip. At a particular time, the instantaneous rate of change (speed) could be 50 miles per hour, or 0 when the car stopped for a red light. In this lesson, you will learn about instantaneous rates of change and how they differ from average rates of change, which you already know about. FilmScription is an online video streaming company. This applet has a graph of a function, which models FilmScription's monthly profit in thousands of dollars, where s is the number of subscribers, also in thousands. The applet calculates the average rate of change between points A and B. For example, from (20, 19) to (40, 31), the average rate of change is (y2 − y1)/(x2 − x1) = (31 − 19)/(40 − 20) = 12/20 = 0.6 thousand dollars per thousand subscribers. The result implies that when the number of subscribers increases from 20,000 to 40,000, each new subscriber, on average, brings $0.60 net profit per month for the company. If you drag point A closer to B, say to A(30, 24), the rate of change becomes 0.7 thousand dollars per thousand subscribers. Also, notice that segment AB is very close to the function's graph. As you drag point A closer and closer to B, the rate of change becomes closer and closer to 0.8 thousand dollars per thousand subscribers. It is reasonable to estimate that at B(40, 31), the instantaneous rate of change is 0.8 thousand dollars per thousand subscribers. That instantaneous rate of change implies that when the number of subscribers reaches 40,000, the company's monthly profit is increasing by $0.80 per new subscriber. An instantaneous rate of change shows how the function's value is changing at a particular point. It shows the trend of where the function is going, how fast or slowly the function is increasing or decreasing. An instantaneous rate of change at point A is the average rate of change from A to B, where B is infinitely close to A. In the applet, point G shows the instantaneous rate of change at any point on the function. Drag G along the function and you will see the function's instantaneous rate of change increase as the point's x-value increases. The slope of the line through G shows the instantaneous rate of change. As the slope increases, the rate of change increases. This implies that the more subscribers the company has, the more net profit it is making from each new subscriber.
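The secant-slope-approaching-tangent idea is easy to reproduce. The profit function below is a stand-in chosen to match the applet's quoted values (profit of 19 at 20 thousand subscribers, 24 at 30, and 31 at 40); the lesson's exact function is not given.

```python
def profit(s):
    """Monthly profit (thousands of dollars) for s thousand subscribers."""
    return 0.01 * s ** 2 + 15

def secant_slope(s1, s2):
    """Average rate of change between two points on the profit curve."""
    return (profit(s2) - profit(s1)) / (s2 - s1)

# Dragging A toward B(40, 31): the secant slope approaches the tangent slope.
for a in (20, 30, 39, 39.9, 39.99):
    print(a, round(secant_slope(a, 40), 4))
# 0.6, 0.7, 0.79, 0.799, 0.7999 -> the instantaneous rate at s = 40 is 0.8
```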

Ratio

The relative sizes of two or more quantities; technically, the relationship of two quantities, normally the quotient of one divided by the other.

Input-output Pairs

The representation of an input value and an output value as an ordered pair, written (x, y).

When did the second low tide of the day occur? A ship you are on will arrive near its next port at noon tomorrow. The following graph indicates the tidal predictions for tomorrow. When is the safest time for the ship to travel into the port, to avoid getting stuck on sand bars?

The second low tide occurred around t = 11.2, which corresponds to 11:12 a.m. Your ship would likely wait until about 2 p.m. to travel into port, as this allows it to take advantage of a high tide and be as safe as possible.

Minima

The smallest amount, or lowest value, on a function's graph.

Minimum Value

The smallest value in a set of data.

Given two logistic equations modeling a real-world situation, identify the equation or model that represents an ideal situation based on the real-world situation.

The two remaining bidders for the Hillcrest Realtors computer replacement job are getting anxious. One, called Plan A by Hillcrest, advocates for getting more computers installed faster, then doing intensive testing, while the bidder for what Hillcrest calls Plan B thinks it would be better to spend more time upfront on testing, then completing the computer installation. In this lesson, you will learn more about how Hillcrest chooses between A and B. Previously you compared the rates of change for two logistic functions by examining their graphs and equations. In this lesson, you will take one more step, interpreting the comparison in context. You will learn that both long- and short-term trends can be measured by their rates of change and that doing so can help find an optimal solution from the available options.

Polynomial Degree

The value of a polynomial's largest exponent; that is, if the largest exponent in a polynomial function is 3, the polynomial is "of the third degree."

Given a real-world scenario either by written description or by graph, identify if the scenario would have an asymptote.

There are all kinds of natural limitations—variables that can only be so big or so small—on the variables in our lives. Computers and cars can only go so fast; populations can only grow so large; a person's effective work space can only be so small. All these natural limitations are tied to asymptotes and to logistic functions. In this lesson, you will see how to identify variables that have natural limitations, both in graphs and in written descriptions. Being able to identify variables with natural limitations will help you identify what those limits are.

One Common Function f(x)=mx

There are many everyday scenarios that can be expressed with common functions. Function notation is given as y = f(x). Remember: this does not mean multiplying f times x. The notation is read as "the value of f at x" or just "f of x." That is, for a given input of x, you want to know the function's output, f(x). Although f is often used to represent a function and x is commonly used for the explanatory (input) variable, you really can use any letters you want. You could use g(x) or W(a). The point is to pick variable names that help you remember the quantities in question. The most basic type of function is f(x) = mx, where m is a number multiplied by the input, x; that is, the function's output is the product of a constant and the variable input. You actually use this function every day, though you might not think of it in these mathematical terms just yet. Consider these examples: A real estate agent generally gets paid 5% commission based upon the sale price of a property. The agent's pay can be modeled as C(p) = 0.05p, where C represents the commission and p is the price of the property. A tech support specialist takes 15 calls each hour. The number of calls in a workday can be modeled by the function C(h) = 15h, where C represents the total calls for the workday and h is the number of hours worked. You go to the store to buy bananas, which are priced this week at $0.40 per pound. So at the counter, your cost can be modeled by the function C(b) = 0.40b, where C represents cost and b represents how many pounds of bananas you get.
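All three examples are one-line functions in Python; the names mirror the formulas above.

```python
def commission(p):
    """C(p) = 0.05p: a 5% commission on a property priced p dollars."""
    return 0.05 * p

def total_calls(h):
    """C(h) = 15h: calls handled in h hours at 15 calls per hour."""
    return 15 * h

def banana_cost(b):
    """C(b) = 0.40b: cost of b pounds of bananas at $0.40 per pound."""
    return 0.40 * b

print(commission(300000))        # 15000.0 dollars on a $300,000 sale
print(total_calls(8))            # 120 calls in an 8-hour workday
print(round(banana_cost(3), 2))  # 1.2 dollars for 3 pounds
```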

Interpreting Quantitative Variables

There are two broad categories of variables: quantitative and qualitative. You have already been working with quantitative variables, variables that can be measured and described numerically. Cost, time, speed, and distance are all quantitative variables. There are also qualitative variables, which do not have a numerical value but instead describe a quality of something. For example, colors, models of cars, political party affiliation, and computer brands are all qualitative variables; they describe a quality of something that cannot necessarily be measured. Sometimes you encounter a data set where both types of variables are present. For example, say you were tallying sales from a bake sale as in the following table:

Baked Good   Total Sales (Dollars)
Cookies      $25
Cupcakes     $40
Muffins      $45

Here, "Baked Good" is a qualitative variable; there is no inherent numerical value attached to the variable. Instead, it is used to describe what was sold. Sales, on the other hand, is quantitative. In a situation like this, it would not be appropriate to graph the data on a coordinate plane since you do not have two quantitative variables. You could graph this data with a bar chart, but that is beyond the focus of this course. You can, however, still use function notation to represent this data. For example, if B represents the type of baked good and S represents the total sales of a baked good, you could represent the previous table with the following function input-output notation: S(Cookies) = 25, S(Cupcakes) = 40, S(Muffins) = 45. One other thing: While you cannot graph this data on a coordinate plane, you can make some quantitative comparisons. Muffins were the most profitable item, cookies yielded the lowest amount of sales, the bake sale made more money from cupcakes than from cookies, and so on. The following table shows the total number of cars of different brands on Nella's lot: Brand is qualitative; cars available is quantitative. Since A (cars available) is the dependent variable and B (brand) is the independent variable, the first row of the table could be rewritten as A(Honda) = 5.
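A qualitative-input function is naturally modeled as a lookup table. Here is a minimal sketch of the bake sale data:

```python
# S maps a qualitative input (the baked good) to its quantitative output.
S = {"Cookies": 25, "Cupcakes": 40, "Muffins": 45}

print(S["Muffins"])       # 45 -- the same fact as S(Muffins) = 45
print(max(S, key=S.get))  # 'Muffins', the most profitable item
```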

Average Rates of Change in Tables

Think about this: Sunrise Sky Spa had 1,000 members in January 2010 and 2,200 members in January 2011. This means that the average rate of increase (a form of average rate of change) of Sunrise Sky Spa's memberships was 100 members per month in 2010: membership grew by 2,200 − 1,000 = 1,200 members over 12 months, and 1,200 ÷ 12 = 100 members per month. In many situations, the second statement is more useful because it shows the company's membership was growing at a certain rate. In this lesson, you will learn how to calculate the average rate of change using input-output pairs. You will also revisit the formula for calculating average rate of change.

In the following graph, identify all the concave-up segments. Which statement is true for the segment from point F to point G? Which statement is true for the segment from point D to point E?

This graph is concave up from x = 1 to x = 2 and from x = 3 to x = 4; in interval notation, these are the intervals [1, 2] and [3, 4]. From F to G, the function is concave down while increasing, so the function's value was increasing more and more slowly. From D to E, the segment is concave up while decreasing, so the application's CPU usage was decreasing more and more slowly; that is, the application was releasing less and less CPU resources.

Applications of Functions

This lesson reinforces the concepts you have encountered so far dealing with functions and their inverses, in some new situations where back-and-forth conversions are especially useful. You will practice using inverse functions when converting money, temperature, and measurements. You will also calculate the values of functions at given points using function notation.

Identifying Slope and Y-Intercept

Throughout life you may find yourself in situations where you need to track rates of change. One example is weight loss. When you put yourself through the arduous process of calorie counting, measuring food, and journaling meals, you want to know that you will reach an ideal healthy weight in a reasonable amount of time. How do you determine how long it will take? To answer this question, you can use a linear function. If you set a goal to lose 1.5 pounds per week and your starting weight is 240 pounds, the formula W(t) = −1.5t + 240 can predict your weight in 1 week, 5 weeks, or 10 weeks. In this formula, t represents the number of weeks on the diet, the y-intercept is your starting weight of 240, and the slope, −1.5, is the rate at which your weight changes. In this lesson, you will learn how a linear function can be applied to a variety of scenarios and contexts.

Inverse Functions Continued

Thus far, you have examined data with a clear distinction between the independent and dependent variables. Sometimes, however, data can be viewed with either variable serving as input or output, without changing the meaning. Data sets that can exchange input for output in this way are referred to as inverse functions. Examine the following example of data that could be represented in either direction. The meaning of the data would not change if the instrument column were on the left and the individuals' names on the right, instead of the way they appear now. The question "What instrument does Cindy play?" is more suited to viewing the individuals as the independent variable and the instruments as the dependent variable. On the other hand, asking the question "Who plays piano in the band?" flips this relationship and views the instruments as the independent variable while the dependent variable is the individuals. Examine the following more practical example. This is a model of Revenue, R, and Units Sold, S, for a company and the corresponding graph of the data. You could ask, "How much revenue do you get from the sale of 200 items?" This would be helpful for predicting revenue from sales. From a mathematical perspective, you are asking what the value of R(200) would be. Said in the context of the problem, R(200) is the revenue generated from 200 units sold. For this data, R(200) = 1000. On the other hand, what if your company needs to generate an additional $1,000 of revenue next month to help stay on target for annual goals? The question then becomes, "How many items do you need to sell for a revenue of $1,000?" In this case, you are looking for S(1000), since S returns the number of items sold for an associated revenue. This turns out to be S(1000) = 200, which mirrors what you saw before. Next is the graph of the function S from this perspective—that is, input a revenue, and output the associated number of items to sell. The main point is that these two questions look at the same data from opposite viewpoints. Said another way, S outputs the associated items sold for a given revenue while R outputs the associated revenue for a given number of items sold. S and R do exactly the opposite of one another, which is why you call them inverse functions of one another. To denote an inverse function, you use the following notation: R−1 or S−1. More generally, the inverse of f(x) is written f−1(x) and read "f inverse of x." In terms of the graphs of a function and its inverse, a "flipping" occurs which is noticeable in both the coordinates and the graphs of the functions. For example, notice below how the associated coordinates "flip" for the two functions S and R.

Units Sold (S)   Revenue (R)   Coordinates for R(S)   Coordinates for S(R)
50               250           (50, 250)              (250, 50)
100              500           (100, 500)             (500, 100)
120              600           (120, 600)             (600, 120)
150              750           (150, 750)             (750, 150)
200              1000          (200, 1000)            (1000, 200)

Now examine the graphs of these two functions together. Did you notice the flipping, even between the two graphs?
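The coordinate "flip" is easy to verify computationally. Here is a minimal Python sketch using the table above; the names revenue_from_units and units_from_revenue are illustrative:

# Revenue R as a function of units sold S, from the table above
revenue_from_units = {50: 250, 100: 500, 120: 600, 150: 750, 200: 1000}

# The inverse function simply swaps each (input, output) pair
units_from_revenue = {r: s for s, r in revenue_from_units.items()}

print(revenue_from_units[200])   # 1000: R(200) = 1000
print(units_from_revenue[1000])  # 200:  S(1000) = 200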

Turning Graphs into Data

Tracy works in a video-streaming company and has been given some data from her boss about the "bargain movies" that they stream to their customers. Tracy's boss is not quite sure what to make of this graph, though. When looking at the graph, Tracy sees point A first. But how are the coordinates for point A identified? You need to find the corresponding values for this point on the x- and y-axes. For point A, the x- and y-values are both 0. This means that if someone streams no movies, the company makes no money. Tracy thinks, "That's logical." This point would be represented as the ordered pair (0, 0). You also call this point the origin, since this is where you always start graphing for all coordinates. Tracy then looks at the next point, point B. She knows that if someone streams 1 movie, the company makes $2. So the coordinates of this point are (1, 2). Remember that you always write the x-axis value first (on the left) and then the y-axis value next, on the right. Tracy then sees that the rest of the coordinates are C(3, 6) and D(4, 8). Essentially, an ordered pair gives us a way of matching up a value from the x-axis to a corresponding value on the y-axis. Notice that both a point on the graph and the point's ordered pair represent the same information in different ways—one is just a location (the graph) while the other is how you communicate that location in a written format (the coordinates, or the ordered pair).

Given a scatterplot of real-world data, a polynomial regression function for the data, and the associated coefficient of determination, interpret the regression function and the associated coefficient of determination in context.

Up to this point, you have learned how polynomial functions fit data. In this lesson, you will see how those functions are created using software. You will not need to learn how to do this with software yourself. The important thing for you to know is how these functions are created. One number used to judge how well a function fits the data is the coefficient of determination. You will focus on these skills because "big data" is becoming so common that you will see more and more of this type of analysis in both your professional and your daily life. That means you will need to be skilled at spotting a problem when someone has done a bad job of finding functions that fit data. You will also need to know the questions to ask when presented with some of these models. In this lesson, you will see where some of those functions come from, using data and a process called regression.
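To make "created using software" concrete, here is a minimal Python sketch of a polynomial regression and its coefficient of determination; the data values are made up purely for illustration, and the numpy library is assumed to be available:

import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)   # hypothetical inputs
y = np.array([4.2, 2.9, 2.6, 3.1, 4.4, 6.3])    # hypothetical outputs

coeffs = np.polyfit(x, y, deg=2)   # fit a degree-2 polynomial to the data
y_hat = np.polyval(coeffs, x)      # the model's predicted outputs

# Coefficient of determination: r^2 = 1 - SS_residual / SS_total
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(coeffs, 1 - ss_res / ss_tot)

Software packages report this same r²-value; the closer it is to 1, the better the polynomial follows the data.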

Using Concavity to Identify Optimal Situations or Times

In this section, you will use concavity to identify the optimal times to be part of the Upscale Nest company. Since it is a business, you will naturally look at the revenue of the company and start to identify the most profitable times for the company. Consider this example: Upscale Nest Home Decor's daily revenue, in millions of dollars, can be modeled as a function, R(t), in the following graph, where t is the number of months since January 1 last year. From (0, 4.25) to (2, 2.7), the function decreased, implying that the company's revenue decreased in January and February. This part of the function is also concave down, where revenues were decreasing faster and faster. These sales results are a sign to Upscale Nest that something needs to change. Maybe sales are declining so sharply in January and February because people are tired of over-decoration after the holidays. That may be true, but Upscale Nest's management feels that there must still be a solution. Clearly, the company would rather have the function be concave up for January and February. Concave up would mean that either revenues were decreasing slower and slower or that revenues were increasing at an increasing rate. Consider each of those situations in context: if revenues were decreasing slower and slower, Upscale Nest would be slowing down its bad sales; if revenues were increasing faster and faster, Upscale Nest would have more and more revenue as time went on. Either of those would be far preferable to revenues decreasing faster and faster, which is a worst-case scenario for a retailer. Upscale Nest knows that change is needed. Perhaps it needs more marketing during these times or a special sale. Either way, concavity gives the company greater insight into how severe these down times are. Think about what the best possible case would be for Upscale Nest. How would management feel about seeing a graph that implied revenues increasing slower and slower or one that implied revenues increasing faster and faster? If you said that revenues increasing faster and faster would be Upscale Nest's ideal scenario, you would be correct. Keep in mind that increasing faster and faster corresponds to concave up.

Interpreting Asymptotes in Scatterplots

Frances is an IT manager at another small firm. When the last major update rolled out at her firm, Frances captured the following data on the number of IT help requests over time (measured in days).

Usually, scatterplots are the first tool you have when looking at data. This means that sometimes you can identify and interpret asymptotes right from scatterplot data. Consider this example. When a major update rolls out in a small firm, the IT department always expects an increase in help requests. During the last major update at his company, the IT manager, Rick, collected data on the help requests over time, measured in days. In the following graph of the data that Rick collected, can you see a horizontal asymptote? For Frances's data, there are no asymptotes: the y-values are not tending toward a specific value anywhere, so there is no asymptote here. This means that Frances is not seeing a specific number of help requests after the rollout of a major update.

Interpreting Instantaneous Rates of Change for Linear Functions

Velocity is a very common rate of change. For example, if you travel 325 miles in a car in 5 hours, you averaged 65 miles per hour (mph). However, there were very likely times when you were traveling faster than 65 mph and times when you were traveling slower than 65 mph. So you have both an average velocity (the average speed for the entire trip) and instantaneous velocities (the speed at any given moment that is shown on your speedometer). In this lesson, you will interpret instantaneous rates of change for linear functions. You will see why linear functions turn out to have the same instantaneous rate of change everywhere, but the skills you learn in this lesson will help you when you see instantaneous rates of change with other, nonlinear functions in later units.

Velocity

(in general use) speed.

Why Exponential Regression Is Not a Good Fit

What about an exponential regression as a solution to Sarah's quandary? Exponential regressions suffer from one of the same problems as polynomial regressions—they tend to go to infinity or negative infinity as they look to the past or future. In some cases, such as models of radioactive decay, this is valuable. However, when modeling things like populations or revenue projections, it is best to avoid infinity because in context, infinity is not realistic.

Asymptote Basics

What is an asymptote? The simple answer is that asymptotes are natural limits or boundaries. Consider the influenza virus, commonly called the flu. Within months or even days, the flu can spread to thousands of people. Pharmaceutical companies, which employ the people responsible for making the flu shot, are aware of the flu's ability to grow exponentially over time and of its potential to kill thousands when it spreads at a faster and faster rate. Pharmaceutical companies create each year's flu shot with the intent of slowing down the disease's growth rate. Mathematically, the number of people infected with the flu at any given time cannot be less than zero, since it is impossible to have a negative number of people. Notice on the graph how the y-values (the number of people infected) tend toward zero as you look at smaller and smaller x-values. When a graph has a horizontal asymptote, its y-values tend toward a certain value as the x-values head in either the positive or the negative x-direction. When working with exponential growth, these natural limits or boundaries can sometimes be determined without computations. In the flu example, the naturally occurring limit is y = 0. This is the y-value that the function tends toward as its x-values get small. Later, you will see how to do mathematical computations to determine asymptotes, but for now you will just use reasoning to determine them. Here are some more examples of functions. Some are graphed with asymptotes, some without. Use these as examples to study identifying asymptotes graphically.

No asymptote: This graph is of a linear function. Linear functions never have asymptotes.
No asymptote: This graph is of a polynomial function. Polynomial functions never have asymptotes.
Asymptote at y = 500: This graph is of an exponential function. Exponential functions always have just one asymptote.
Asymptote at y = −40: This graph is of another exponential function. Exponential functions always have just one asymptote.
Two asymptotes, one at y = 0 and the other at y = 10: This graph is of a logistic function. Logistic functions always have two asymptotes.
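You can also see this behavior numerically. Here is a minimal Python sketch, assuming a made-up exponential flu model y = 1000 × e^(0.5x) purely for illustration:

import math

def infected(x):
    # Hypothetical exponential model: y = 1000 * e^(0.5x)
    return 1000 * math.exp(0.5 * x)

# As x gets more and more negative, y tends toward the asymptote y = 0
for x in [0, -5, -10, -20]:
    print(x, infected(x))   # 1000.0, then about 82.1, 6.7, 0.045

The y-values never actually reach 0; they just get arbitrarily close, which is exactly the natural-boundary behavior an asymptote describes.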

Given a data set and a proposed function to model the data, determine if the proposed function is the best function to fit the data.

What is similar about these scenarios? A city council is studying the effectiveness of recent funding and programs to help homeless people as it prepares next year's budget. A technician looking at the computing speed of CPUs in the past is trying to predict the speed of new CPUs in the near future. A business owner is creating a chart of the number of customers over time, trying to predict the number of customers in the future so she can make budgetary decisions. As you may have noticed, in all these situations, people are using real-world data to predict future values or trends based on current and past data. In order to do this, these people must choose what type of function best fits their data. In this lesson, you will learn how to determine which type of function is appropriate for a given set of data.

Given a data set, a proposed function to model the data, and an intended use of the model, determine if the use is in accordance with interpolation and extrapolation values.

What would you think if someone said that he could teach you how to predict the future? You might be skeptical because, after all, predicting the future sounds supernatural. Well, believe it or not, prediction is not a superpower, just a mathematical skill. You can even use these same mathematical skills to predict values in the past. The ability to predict past and future values can be exceptionally useful. In this lesson, you will start learning how to predict these values with mathematics and what to call these predictions for the past and future.

Attainable Predictions

When Muhammad was 30 years old, he had $50,000 saved for retirement. He then set a goal of having $1,000,000 in his retirement account by the age of 65. When he was 35, he had $150,000 in his retirement account. How was Muhammad doing at that point in his progress toward his goal? To find out, use a function to model his savings in thousands of dollars, using x = 0 to represent when he was 30 years old. According to the given conditions, the function passes through (0, 50) and (5, 150). The function is in the following graph: The function passes through (35, 750). This implies that despite Muhammad's good effort, if the trend continues as it started in the first five years of his saving, he would have only $750,000 in his retirement account when he is 65 years old. By what percent would he miss his goal if he continues saving at this rate? The difference between the projected value and Muhammad's goal is 1,000,000 − 750,000 = 250,000. He would miss his goal by 250,000 ÷ 1,000,000 = 0.25 = 25%. Muhammad saw this projected result after five years of saving, so he adjusted his goal. Starting at the age of 35, he set a new goal to see $30,000 in growth per year in his retirement account. Now build a new function for him. This time, use x = 0 to represent when Muhammad was 35 years old, so the point (0, 150) is on the function. In addition, the function's rate of change is 30 thousand dollars per year. The following graph represents this situation. When Muhammad reaches 50 years old, he is projected to have $600,000 in his retirement account. At this rate, will Muhammad reach his goal of saving $1,000,000 by the time he is 65? The graph passes through the point (30, 1,050), implying that he should have $1,050,000 in his account at the age of 65. Muhammad is well on target to reach his $1,000,000 dream for retirement. Starting at the age of 35, Muhammad would see $30,000 in growth per year in his retirement account. In the following function, x = 0 represents the year when he was 35 years old. When he was 60, he had $800,000 in his retirement account. By what percent was he missing his goal? Muhammad was missing his goal by 11.11%. Correct! To achieve his goal, by the graph, he would have saved $900,000 when he was 60 years old (x = 25). He missed the goal by $100,000, and 100,000 ÷ 900,000 ≈ 0.1111 = 11.11%. When deciding if a goal is attainable, determine if the goal is above or below the function's projected value. To calculate the percentage of attainment of a goal, or the result above or below the goal, find the difference between the goal and the actual performance number, and then divide the difference by the goal.
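A quick Python sketch of the percentage calculations above; the function name percent_missed is an illustrative choice:

def percent_missed(target, projected):
    # Difference between the target and the projection, divided by the target
    return (target - projected) / target * 100

print(percent_missed(1_000_000, 750_000))  # 25.0: the first plan misses by 25%
print(percent_missed(900_000, 800_000))    # 11.11...: the age-60 checkpoint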

Review the following graph of the number of PCs installed for Plan A and Plan B. For each of the following items, which statement is true?

When t = 1, the instantaneous rate of change of both functions is close to 0, implying that the pace of installing PCs is slow on the first day. A flat line has a slope of 0, implying no rate of change. On the 20th day of the project, the pace of installing PCs under Plan A is faster than under Plan B: when t = 20, the instantaneous rate of change of a(t) is larger than that of b(t). When t = 15, the instantaneous rates of change of both functions are about the same.

Given a real-world situation and a graph of a polynomial function modeling the situation, interpret a maximum or minimum in context.

When you analyze this graph, identify when the stock is the most expensive and when it is the cheapest. If the price is treated as a function, those points' y-values are called the function's maximum and minimum. In this lesson you will identify and interpret maxima and minima in polynomial functions. You will learn that maxima and minima refer to response variables and you will also learn that when you view a polynomial graph, you can find maxima and minima by looking for high and low points.

Calculating Average Rate of Change Continued

When you calculate the average rate of change, you are finding the rate at which the function's output (y-values) changes compared to the function's input (x-values). When working with straight lines (linear functions), the average rate of change (slope) is constant. No matter which points you use to calculate the slope on a straight line, you get the same answer. Consider this example: Julian works at a constant rate and can clean 5 shirts in 20 minutes, 10 shirts in 40 minutes, and 15 shirts in 60 minutes. What is the rate of change? In tabular form, it looks like:

x (minutes)   y (shirts)
20            5
40            10
60            15

Think of each (x, y) pair as a point on a line. Since slope is calculated as "change in y" over "change in x," you can find both the change in y and the change in x using two coordinates on the line and the slope formula:

slope (rate of change) = (change in y) / (change in x) = (y2 − y1) / (x2 − x1)

If you use (40, 10) and (60, 15) as (x1, y1) and (x2, y2) in the formula, the slope is:

slope = (15 − 10) / (60 − 40) = 5/20 = 0.25 shirts per minute

Note that you could treat (60, 15) as (x1, y1) and (40, 10) as (x2, y2), and you would get the same slope:

slope = (10 − 15) / (40 − 60) = (−5)/(−20) = 0.25 shirts per minute

What if you chose two different points on the line, say (20, 5) and (40, 10) instead of (40, 10) and (60, 15)? Would you get the same answer? Try it and see:

slope = (10 − 5) / (40 − 20) = 5/20 = 0.25 shirts per minute

The slope is the same, no matter which two points you choose. The slope, or average rate of change, is 0.25 shirts per minute, or 1 shirt cleaned every 4 minutes. Lines only have one average rate of change (slope), but please remember that in the real world, average rates of change go beyond lines. Keeping this in mind prepares you to work with other, nonlinear situations in the future. For instance, maybe Julian gets tired as the day goes on and cleans shirts at a slightly slower rate. When working with nonlinear functions, the average rate of change is not constant. Luckily, the process of computing the average rate of change for nonlinear functions is the same as the process for straight lines: two points are chosen, and slope is calculated. Julian cleaned 60 shirts by 1:00 p.m. In the afternoon, his work efficiency slowed. By the end of his shift at 5:00 p.m., he had cleaned a total of 100 shirts. What is the average number of shirts Julian cleaned per hour from 1:00 p.m. to 5:00 p.m.?

rate = (100 − 60) / (5 − 1) = 40/4 = 10 shirts per hour

By the end of Julian's 8-hour shift at 5:00 p.m., he had cleaned a total of 100 shirts. What is the average number of shirts Julian cleaned per hour for those 8 hours?

rate = 100/8 = 12.5 shirts per hour

Lesson Summary

In this lesson, you learned how to calculate rates of change that were constant and rates of change that were not constant. Here is a list of the key concepts in this lesson: The rate of change (slope) formula is rate = (y2 − y1) / (x2 − x1). You often need to calculate average rates of change that are not constant (that is, nonlinear). For a linear function, the rate of change is the same, no matter which two points you choose. This fact is exactly what makes the function linear. For nonlinear functions, the rates of change can differ between different pairs of points.
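Since the same slope formula works for every pair of points, it is easy to express in code. A minimal Python sketch (the function name is illustrative):

def average_rate_of_change(x1, y1, x2, y2):
    # slope = (change in y) / (change in x)
    return (y2 - y1) / (x2 - x1)

print(average_rate_of_change(40, 10, 60, 15))  # 0.25 shirts per minute
print(average_rate_of_change(1, 60, 5, 100))   # 10.0 shirts per hour (1 p.m. to 5 p.m.)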

Interpreting Average Rates of Change for Linear Functions

When you drive down the interstate with your cruise control on, you can keep going at a constant speed. When you do not have to hit your brakes, your average rate of change is your constant speed. This means that for all linear functions, you will see that their average rate of change is always the same. In this lesson you will calculate the average rate of change of linear functions, and you will see why, for linear functions, this rate is always the same.

Optimizing Sales for Period Pens

While a business can never truly predict how sales of a new product will go, it can benefit from knowing if sales are progressing optimally or not. A business can make better decisions, like whether to increase advertising or to discontinue sales, if it has the data and can analyze it. Now consider this example. Period Pens is releasing a new disposable gel pen to the market. The following graph represents two scenarios on how sales of the new pen progressed over the weeks after its release. Look at Scenario 1 first. In this first scenario, sales started off well but peaked after about 12 weeks. As the weeks marched on, sales dropped off. The instantaneous rate of change became negative and continued to decrease. The rate decreased more and more slowly and sales kept dropping. Now examine the following graph of Scenario 2. In the second scenario, sales grew slowly at first, had a surge heading into week 40, and then continued to increase slowly. The instantaneous rate of change remained positive the entire time, increasing dramatically at first and then slowing down. As the graph approaches 280 weeks, the company was selling about 116,000 pens per week and the instantaneous rate of change remained positive. As you can see by comparing these two situations, Scenario 2 would be optimal over Scenario 1. Period Pens certainly prefers to see a positive rate of change instead of a negative one.

Review Tables and Input-Output Pairs

While graphs are often viewed as the ideal method of data visualization, there are many cases when your purposes are better served by examining raw data. This is particularly true when you are hoping to compare inputs to their outputs quickly and without estimating them from a graph. Throughout this lesson, you will learn to compare outputs quickly in a variety of scenarios. You will learn that tables can be more useful than graphs when you need access to the details, that you can "read" trends in tables similarly to the way trends can be apparent in graphs, and that you can use tables to compare multiple variables if you use multiple columns. Roma, a hospital administrator, manages a county hospital. A couple of years ago, she decided to digitize all patient records. While this was a positive change for patient care, it took some getting used to. Here is the data on the number of help desk calls Roma's hospital made over the last five years. Take a minute to look over the data.

Year   Help Desk Calls (per Week)
1      145
2      162
3      165
4      587
5      465

Comparing the outputs, you can see a slight increase between years 1 and 2, an insignificant change in year 3, a massive increase in year 4, and then a moderate decrease in year 5. Pause for a moment to consider the value of being able to note the exact differences in these values in the real world of business decision-making. If you were graphing this data, the three-call increase between years 2 and 3 could be made to appear significant depending on the scale, when in fact it is pretty minor. Turn your focus to retail now. Jana is part of the leadership of a small retail business, Secrets of Venus, which sells inexpensive colognes and cosmetics. Secrets, as it is called by its loyal customers, has one store that has been successful for several years, and now the company thinks it is time to expand. One of the first things Jana must determine is how many more individuals she will need to hire to staff the new stores. After running some calculations, she determines that each new Secrets store will require 50 employees. She also finds that for every three Secrets stores that open, an additional 10 staffers are needed for the corporate office. In a case like this, it can be tempting to represent the data in a graph. However, if Jana then wanted to determine the exact number of new stores needed to maximize profits while minimizing hiring costs, she would have to take an additional step. She could not make that comparison without returning to the individual data. Knowing this, Jana organizes the number of hires per store for the first six new stores. According to the following table, and assuming each employee is paid roughly the same, including the corporate employees, how much more expensive are salary and benefits for two stores as compared to one store? Is this true each time Jana opens a new store? Since Jana goes from 50 employees at one store to 100 employees at two stores, salary and benefits would roughly double (a 100% increase). This is not true in general, though. Going from two stores to three stores, Jana would increase from 100 employees to 160 total employees (a 60% increase). Another example: Oliver has owned a small landscaping company, Greener Pastures, for six years. Most years, Greener Pastures simply maintained its place in the market, but recently the company has experienced a small growth in profits, and Oliver was able to modestly expand the business.
His employee growth over the past six years is shown in the table in the next question. In the Information Age, you often deal with much more complex data sets than the ones you have looked at so far. Understanding voting trends is part of being an informed citizen, and voting data can be some of the most complex you see. Next, you will look at some data on the Electoral College for the United States. In the United States, a number of trends have become apparent in recent years. Perhaps the most notable trend is the fact that the political party controlling the White House has alternated every eight years since 1992. Which former president won election or reelection by the largest margin during the period between 1992 and 2016? How does the margin of victory compare to that of the 2016 election? Former President Bill Clinton achieved reelection in 1996 with 379 electoral votes; this is 109 votes above the necessary threshold of 270 and 75 votes more than the corresponding margin in 2016.
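Returning to Jana's staffing rule for a moment, the head count is easy to tabulate in code. A minimal Python sketch (the function name total_employees is illustrative):

def total_employees(stores):
    # 50 employees per store, plus 10 corporate staffers for every 3 stores
    return 50 * stores + 10 * (stores // 3)

for n in range(1, 7):
    print(n, total_employees(n))  # 1 50, 2 100, 3 160, 4 210, 5 260, 6 320

This matches the comparison above: one store to two stores doubles staffing (50 to 100), while two stores to three stores is only a 60% increase (100 to 160).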

Currency Conversion

Will has $12.57 in his pocket, and as he crosses the border from the United States into Canada, he wonders how much that amount is worth in Canadian dollars. Currency conversion is one of the most common business scenarios where both original and inverse functions are used, and in many cases, the numbers are far more significant than $12.57. How much does $2 million worth of auto parts cost in Korean won? What is the payment for 5,000 shirts on a day when Indian rupees are trading for 0.016 of a U.S. dollar? What is the difference in profit if you accept payment for six tons of frozen chicken in Swiss francs instead of in euros?
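Here is a minimal Python sketch of a conversion function and its inverse, using the rupee rate quoted above (the rate is the lesson's example figure, and the function names are illustrative):

USD_PER_RUPEE = 0.016  # the exchange rate from the example above

def to_usd(rupees):
    # Original function: multiply by the rate
    return rupees * USD_PER_RUPEE

def to_rupees(usd):
    # Inverse function: divide by the same rate
    return usd / USD_PER_RUPEE

print(to_usd(10_000))   # 160.0 dollars
print(to_rupees(160))   # 10000.0 rupees: the inverse undoes the conversion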

Comparing Short-Term and Long-Term Rate Differences for Pen Sales

Will short-term instantaneous rates of change and long-term instantaneous rates of change always be different? No. Just as there are many variables affecting real-world scenarios, there are many things that may happen. For example, recall Period Pens's release of its new gel pen. Consider Scenario 2, depicted in the following graph, and compare what happened in the short term at Week 20 and then in the long term at Week 240. In the short term at Week 20, the company had sold about 8,450 pens (remember that the unit for the number of pens is thousands), and sales were increasing by about 540 pens per week. At Week 240, in the long term, the company had sold about 111,000 pens and sales were increasing by about 120 pens per week. In comparing these two results, you can say that in both cases pen sales were increasing, though at a lesser rate in the long term. But still, more pens are being sold each week in the long term. In this example, the short-term and long-term rates are more similar than those in the Zonkers situation. This scenario on pen sales would be favorable for Period Pens in the real world. However, be sure to note that in this and any other case, you cannot ever say that short-term instantaneous rates of change definitely predict results in the long term.

Lesson Summary

In this lesson, you examined several situations to compare the differences between short-term and long-term rates of change. You also saw how different instantaneous rates of change in the short term can be from long-term rates of change as they play out. Here is a list of the key concepts in this lesson: Short-term results may not necessarily reflect what will happen in the longer run. Short-term rates of change cannot be used to predict what will happen in the long term. By comparing instantaneous rates of change in the short term and long term, you can see how things change overall in a scenario. Keep in mind that "short-term" here refers to when the x-values are closer to 0, while "long-term" refers to when the x-values are further out from 0.

Given a real-world scenario modeled by a polynomial function, interpret why concave up or concave down would be optimal based on context.

Winter Gear Company manufactures and sells the best items you can find for cold weather—coats, hats, boots, mittens, even skis and snowboards. When would you expect the company to have its best sales—in June or in December? Since more people are going to buy cold weather clothes in December, that is a fairly obvious answer, but by looking at Winter Gear Company in this lesson, you will see how to arrive at much less obvious answers. Previously, you learned what concavity implies in different contexts. In this lesson, you will determine whether it is optimal to have a concave-up or concave-down function for a given situation; that is, you will look carefully at a situation and decide which is better for the individual or the organization.

A city is keeping a headcount of its homeless people. The number has been decreasing since last year, thanks to more funding to help them. The following scatterplot depicts the number of homeless people every month of last year, with a linear function to model the data. A scandal broke out concerning a food-service company, and there is a chance that the company may go bankrupt. The day after the scandal broke, the company's stock dropped to almost 0. The following scatterplot shows the company's stock price during that trading day, with a polynomial function drawn in to model the data.

Yes, a linear model is a good regression choice for the data because the data has a roughly constant rate of change. No, a polynomial model is not a good choice for the data. The function is going to negative values, but the stock price would not. A stock cannot be worth less than nothing. An exponential function would be a better fit for the data, as it looks like the stock price dropped by approximately 50% every hour, implying the function should have a common ratio.

Examine the following graph about Amazon's revenue. Can you use the regression function to estimate Amazon's revenue in 1999? Would that result be trustworthy? Why? Can you use the regression function to estimate Amazon's revenue in 1990? Would the result be trustworthy? Why?

Yes, a regression function can be used to estimate Amazon's revenue in 1999. The value x = −1 is very close to the data range, so the result would be trustworthy. No, a regression function cannot be used to estimate Amazon's revenue in 1990. The value x = −10 is not close to the data range, so the result would not be trustworthy. The smallest trustworthy x-value here would be x = xmin − 0.5 × range = 0 − 0.5(13) = −6.5.

Erika is another small business owner who decided to track the quarterly revenues (measured in thousands of dollars) for her small business. The following graph depicts the quarterly revenue where t = 1 corresponds to the first quarter of 2015. From the data in the line graph, does it look like there may be an asymptote involved in this scenario?

Yes, it looks like there is an asymptote around y = 21.5. The y-values tend toward 21.5, starting around x = 5.

Read the scenario and determine if there would be any asymptotes: When Jenny was born, she gradually got taller for quite a few months during her toddler years. After time, though, she started to get taller more quickly until reaching her full height of 5'4" somewhere in her late teen years. She stayed about the same height for the rest of her life. Read the scenario and determine if there would be any asymptotes: Carlos is opening a business, and his number of customers starts growing very steadily. After several years of being in business, Carlos has to add on to his business to keep up with the growth. What are the two asymptotes of this function?

Yes, there are two asymptotes here. The first is the lower asymptote, when Jenny was born, likely somewhere around y = 20 (this is the average height of a baby at birth). The second is the upper asymptote when Jenny was fully grown, which would be at y = 64, assuming y is measured in inches. No, there are no asymptotes here. Since Carlos's business is growing steadily, a linear function may be a good candidate to model this situation. In any regard, there do not seem to be any lower or upper limits since growth is steady for Carlos's business. y = 60, y = 10

Does the graph contain any horizontal asymptotes? If so, toward which value are the function's y-values tending?

Yes, this graph has two asymptotes. On the left side, the y-values tend towards y = 4, and on the right side, they tend towards y = 3.

The following graph models the number of users, in millions, on a social media platform since 2000. The equation for this model is f(t) = 2.99 × e^(0.09t). The value at t = 9 corresponds to a pilot program that sought to recruit many new users, many of whom did not continue using the social media platform. This data point was therefore excluded from the model.

Yes. This model can be used to extrapolate the number of users in 2017, which would be approximately 13.8 million users. The r²-value for this model is 0.72, indicating a strong model. This means you can go out as far as xmax + (0.5 × range) = 12 + (0.5 × 12) = 18 for extrapolation values. To determine the extrapolation, substitute t = 17 into the equation and find f(17) = 2.99 × e^(0.09 × 17) ≈ 13.8.
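As a quick check on that arithmetic, here is a minimal Python sketch (the variable names are illustrative):

import math

def f(t):
    # Model from this example: f(t) = 2.99 * e^(0.09t), t = years since 2000
    return 2.99 * math.exp(0.09 * t)

x_max, x_range = 12, 12
limit = x_max + 0.5 * x_range    # 18, the farthest trustworthy extrapolation
print(17 <= limit, f(17))        # True, about 13.8 (million users in 2017)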

Estimating Input-Output Pairs

You already know how to evaluate an exponential function's value by substituting a number for the independent variable. For example, to predict the number of memberships at Best Movie Rental in the year 2020, you would substitute x = 20 into B(x), and you have:

B(x) = 500,000 × 0.8^x
B(20) = 500,000 × 0.8^20 = 500,000 × 0.0115292... ≈ 5,765

If that trend continues, Best Movie Rental will have about 5,765 memberships left in 2020. This is a good opportunity to remember that B(20) = 5765 is written in function notation. This notation implies that the point (20, 5765) is on the following graph of B(x). Both B(20) = 5765 and (20, 5765) represent the same information, just in two different formats. Next, estimate the value of the function P(x) when x = 20. On the graph, point D's y-value is what you are looking for. On the y-axis, the distance between 100,000 and 200,000 is divided into 5 grids, making each grid 100,000 ÷ 5 = 20,000 units. It is reasonable to estimate that point D's coordinates are (20, 118,000), or P(20) ≈ 118,000 in function notation. To get an exact value for P(20), you could plug x = 20 into the equation for P. That would give you this calculation:

P(x) = 3,000 × 1.2^x
P(20) = 3,000 × 1.2^20 = 3,000 × 38.3375999... ≈ 115,012.8

The estimate of 118,000 memberships in 2020 from looking at the graph was pretty accurate. Keep in mind that even though you are estimating from a graph, you should be as precise as possible to minimize your degree of error. Estimate the function's value at x = 2.2. When x = 2.2, the associated y-value is about 380, giving f(2.2) ≈ 380. Estimate the corresponding x-value when y = 800. f(2.95) ≈ 800. On the x-axis, each grid represents 0.2 units; on the y-axis, each grid represents 20 units.

Lesson Summary

In this lesson, you learned that an exponential function can increase (that is, grow) or decrease (decay) and does so at a constant ratio. Sometimes estimates from graphs can be used instead of exact values calculated through a formula. Be sure to read the labels on the axes to figure out how many units each grid represents. Even though you are estimating, you should be as precise as possible to minimize your degree of error. Here is a list of the key concepts in this lesson: The function f(x) = C × a^x increases, or grows, if a > 1; these function models are called exponential growth. The function f(x) = C × a^x decreases if 0 < a < 1; these function models are called exponential decay. For values of a that are negative (that is, a < 0), the function f(x) = C × a^x is beyond the scope of this course. You can change a function's input-output notation for exponential functions into coordinates; for instance, if y = 20 when x = 2, then the coordinate (2, 20) is on the graph of the exponential function. Use the information given in a graph to translate input-output pairs into real-life meaning by analyzing the information mathematically and logically.
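Evaluating exponential functions like these is straightforward in code. A minimal Python sketch of the two models above:

def B(x):
    # Best Movie Rental memberships: B(x) = 500,000 * 0.8^x
    return 500_000 * 0.8 ** x

def P(x):
    # The second model: P(x) = 3,000 * 1.2^x
    return 3_000 * 1.2 ** x

print(round(B(20)))  # 5765: about 5,765 memberships left in 2020
print(round(P(20)))  # 115013: close to the graph estimate of 118,000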

Calculating and Interpreting Average Rates of Change

You already saw how to calculate the average rate of change for a couple of situations, such as for Mappit's new website. The function that modeled the number of visitors to the Mappit website is f(x) = 10,200 × 1.1^x. You used this function to find the average rate of change for the number of site visitors from day 5 to day 9. You followed these steps:

Step 1: Substitute 5 and 9 for x in the equation: f(5) = 10,200 × 1.1^5 ≈ 16,427 and f(9) = 10,200 × 1.1^9 ≈ 24,051.

Step 2: Set up and calculate using the formula: (24,051 − 16,427) / (9 − 5) = 1,906.

But what does this mean? Sure, it is the average rate of change, but what does the average rate of change mean in this context? An easy place to start is the units. The unit of the numerator is "site visitors," since the dependent variable is in the numerator and measures site visitors. The unit of the denominator is "days," since the independent variable is in the denominator and measures days. Therefore, the unit for the rate of change is "site visitors per day." This calculation finds an average, which is the amount of change in the number of site visitors per day on average. That is, it is as if the number of visitors changed by the same number each day, which it likely did not. From all this, you can see that an average rate of change of 1,906 from x = 5 to x = 9 means more than just the number. The average rate of change is "from day 5 to day 9, the number of visitors increased by an average of 1,906 people per day." That is a bit of a mouthful, but it describes the situation nicely. Moreover, it gives you an actual number that describes the growth. This would be helpful for planning future resources or infrastructure needed for the website. Remember, the calculations for average rate of change and slope are the same. You can take this time to practice calculating the average rate of change. If you feel comfortable calculating it, go ahead and use the following applet to see the average rate of change from day 5 to day 9. In the applet, you should have the two points (5, 16,427) and (9, 24,051). You can then see the corresponding line through those points. Remember that the slope of the line through the points is the average rate of change. That is, the slope of the line that passes through these two points is the average rate of change from day 5 to day 9. It turns out that this line has a slope of 1,905.97. You can also try moving the points around and see the changes that occur in the slope and the average rate of change for different intervals. Given two points, how can the average rate of change be shown in a graph? The average rate of change can be shown by drawing the line that passes through the two points and finding the slope of the line. The average rate of change and the slope of the line through these points are the same. Franco has built a new smartwatch app that measures brain activity. He releases the app and wants to track the number of users who have installed it using the exponential function f(x). The variable x represents the number of weeks after the app was released. The average rate of change from x = 6 to x = 12 is 12,876. What does this rate of change mean? This rate of change means that from week 6 until week 12 after the app was released, the number of users increased by 12,876 users per week on average. The average rate of change for the given time period is 12,876. This is the average increase in the number of users. The exponential function k(x) represents the value of a smartphone x months after purchase. It is given that k(3) = 400 and k(10) = 316.
What is the average rate of change in words? From 3 months after purchase until 10 months after purchase, the value of the phone decreased by an average of $12 per month.

Lesson Summary

In this lesson, you saw more examples of how to calculate and interpret average rates of change. In particular, you looked at two fictional organizations, Mappit and Assistance A-Bounds. Here is a list of the key concepts in this lesson: The average rate of change needs to be described using a unit of change, such as dollars per hour or feet per second. It is critical to pay attention to the units in the numerator and denominator, which will help identify the unit of change. The average rate of change is the slope of the line that passes through the points for which you are trying to find the average rate of change. The average rate of change from x = a to x = b is calculated using the slope formula, m = (f(b) − f(a)) / (b − a). You can also use this version of the slope formula if you prefer: m = (y2 − y1) / (x2 − x1). Unlike for a linear function, average rates of change for an exponential function vary depending on which points are used. The variation in the average rates of change for an exponential function is similar to the variation in a polynomial function.

Independent and Dependent Variables on a Graph

You are explaining a graph to your new employee, Jamal, at your Double Dip! ice cream franchise store. The point of the graph is how outside temperatures affect ice cream sales. You explain that the warmer it is, the more ice cream you sell, and that the colder it is, the less ice cream you sell. Jamal asks which variable is the independent variable and which is the dependent variable. You will be able to answer that question shortly. In this lesson, you will see how independent and dependent variables are positioned on a graph and why context is so important when deciding whether a particular variable is an independent variable or a dependent variable.

Calculating Average Rate of Change

You can see how helpful an applet is, but how do you calculate the average rate of change without using an applet? There is a formula you can use. To calculate the average rate of change given points (x1, y1) and (x2, y2), the formula is:

rate = (y2 − y1) / (x2 − x1)

Recall that Grace jogged 2 miles in 0.5 hours and 4 miles in 1 hour. You can translate this data into two points: (0.5, 2) and (1, 4). The points can be labeled (x1, y1) and (x2, y2). Note that subscripts are used because superscripts mean exponents, which is not what you want here. In this scenario, x1 means "the x-value of the first point," and y2 means "the y-value of the second point." Apply the formula, and you have:

rate = (y2 − y1) / (x2 − x1) = (4 − 2) / (1 − 0.5) = 2 / 0.5 = 4 mi/hr

Linear and Polynomial Patterns

You have been presented with situations that already have functions attached to them. Now you will start with the data and find a function, to get a better idea of how the modeling process plays out for real-world problems. You have learned about linear, polynomial, exponential, and logistic functions, so these will be the functions from which you will choose. Sometimes the shape of the scatterplot or the details of the situation may be enough to know which type of function to use. Other times, you will need to use a process of elimination to determine which function to use. Start with this example: Jessica, a real estate agent, has a client who wants to rent out her 2,000-square-foot house. However, there are very few comparable houses in the neighborhood to determine how much to charge for rent. To help her client, Jessica collected some rent data in the larger neighborhood. She then made the following scatterplot. By the shape of the data, which forms a nearly straight line, it is reasonable to run a linear regression here because linear functions model data with a constant rate of change. In this situation, it also makes sense for an owner to charge rent based on square footage at a fairly constant rate. So, by the shape of the scatterplot and the situation, you are able to determine that a linear function would be best here. Look at another data set. Sulaiman is a contractor for large auto manufacturers; he tests auto performance and reports the results. Today, he is testing the acceleration of a new car. This next scatterplot depicts data on the distance of the car during acceleration. From the curved shape of the scatterplot, it is clear that this is not a linear function. That means that the remaining possibilities are polynomial, exponential, or logistic functions. Given the scenario's context, the distance data was collected when the car was accelerating, meaning that the distance should be allowed to get as large as possible since the car could theoretically keep going forever. Therefore, the logistic model should also be eliminated. Remaining for consideration are polynomial and exponential models, and in the context of this course, that is sufficient. That said, look at the shapes of some common polynomial functions. When a scatterplot matches the shape of one of them, you could use a polynomial regression. As it happens, the time-distance function is quadratic when there is constant acceleration. Because of that, a polynomial regression would be ideal here. Remember that higher-degree polynomial functions have more turning points. However, higher-degree polynomials are also harder to work with, and because of that, they are rarely used to model real-life data. Also, when a data set has a lot of turns, it is harder to argue that there is a pattern in the data, such as in the following graphs.

Making Predictions

You have been put in charge of finding a new vendor for the cell phones your company provides to its sales team. You have narrowed the choices down to Rush, which offers a cell phone plan for $60 per line, and Rely-a-Phone, which offers a cell phone plan for a $50 flat fee, plus $50 per line. The two companies are comparable in terms of service and dependability, so it all comes down to price. Which is a better deal? Examine their graphs: The cost depends on two factors: which company, and how many phones. In this situation, there are two input values, which determine the cost. If a function has more than one input value, it is called a multivariate function. In function notation, you write f(x,y)=z, where x and y are input values, and z is the output. Let C(b,n) model the cost of purchasing n cell phones from company b. If you purchase 5 cell phones from Rush, according to the first graph, the cost is $300. In function notation, you write C(Rush,5)=300. Note that you could have defined the function as C(n,b), where n is the number of cell phones and b is the company. In that case, you would write C(5,Rush)=300. To avoid confusion, you should always define your function before using function notation, so readers know what each variable means. The next section presents a similar scenario to the previous one and is provided only if you want some additional practice on this topic. If you are feeling comfortable about this topic, just skip to the next section.
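A dictionary-style Python sketch of this multivariate cost function; defining C(b, n) in code mirrors the advice to define the function before using the notation:

def C(b, n):
    # Cost of n cell phones from company b
    if b == "Rush":
        return 60 * n          # $60 per line
    if b == "Rely-a-Phone":
        return 50 + 50 * n     # $50 flat fee plus $50 per line
    raise ValueError("unknown company")

print(C("Rush", 5))          # 300, matching C(Rush, 5) = 300
print(C("Rely-a-Phone", 5))  # 300: the two plans happen to tie at 5 phones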

Goals and Graphs

You have learned a lot about functions so far. The real purpose, though, is to use functions in real life. In this lesson, you will learn to use functions for several important real-world purposes—to make predictions, set goals, evaluate actual performance in relation to a goal, and determine if a goal is attainable.

Given a data set, a proposed function to model the data, and an intended use of the model, determine if the use addresses error in predictions.

You have learned about outliers, interpolation, and extrapolation, but you might still be wondering how those all fit together for using models to predict future values. Of what practical use is this information? Consider this example: For a team of forest rangers trying to understand the dynamics of a population of wolves in a large forest, creating reliable predictions based on interpolation and extrapolation is very important, along with the appropriate way to deal with outliers. On one side, local ranchers are up in arms about wolf predation. Environmentalists and animal activists protest every weekend to protect the wolves. Hunters are another interested group; they would like to see more wolves to hunt. With all these conflicting pressures, the rangers must be sure that their predictions for the wolf population are accurate. In this lesson, you will learn how to use a regression model appropriately to find answers to problems like the wolf population. You will also examine the use of a proposed model in detail.

Interpreting Inputs and Outputs for Polynomial Functions

You have learned to model real-life situations with linear relationships. However, many input-output relationships are more complicated than a straight line can represent. Take ice cream shop revenues as an example. Ice cream sales are higher in warmer months and lower in cooler months. If you were to graph the sales of ice cream each month for a year, the graph would be curved, meaning a line cannot represent this relationship. In this lesson, you will learn how to model input-output data using polynomial functions, which have graphs that are curved rather than straight. More specifically, you will learn about polynomial functions that can handle increasingly complicated situations and how to apply the order-of-operations rules to calculate outputs for polynomial functions.
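As a preview of how those output calculations go, here is a minimal Python sketch with a made-up quadratic; the coefficients are purely illustrative:

def p(x):
    # Hypothetical quadratic: p(x) = -2x^2 + 24x + 10
    # Order of operations: the exponent first, then multiplication, then addition
    return -2 * x ** 2 + 24 * x + 10

print(p(0))  # 10
print(p(3))  # 64, from -2(9) + 72 + 10
print(p(6))  # 82, from -2(36) + 144 + 10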

Given a real-world situation modeled by an exponential function and a rate of change at a specified x-value, interpret the rate of change in context of the real-world situation.

You have probably spent time comparing different rates of change in linear situations, whether you thought of it that way or not. Perhaps you drove from Minneapolis to Chicago. If you drove at 65 miles per hour, the trip took nearly an hour less than if you drove at 55 miles per hour. What if the rate of change were not constant, as in an exponential function and in the real world? In this lesson, you will interpret rates of change for exponential functions in different contexts, and you will learn how to compare average rates of change.

FasterAid and Horizontal Asymptotes

You have seen horizontal asymptotes before. All exponential functions have one horizontal asymptote and all logistic functions have two. If the function's y-values tend toward a specific value as a function's x-value becomes very large (either positive or negative), then the function has a horizontal asymptote. In other words, if a function's y-values get closer and closer to some horizontal line, then the function has a horizontal asymptote. Say that the function C(t) models FasterAid's customer satisfaction rating, as a percentage, where t is the number of months since the training program started. The following applet is a graph of C(t). Note that this graph does not look like any of the other functions you have seen in this course. That does not mean that you cannot apply the same principles of asymptotes to these general functions. Slide point A along the function. When you slide the point to the far left, its y-value gets closer and closer to 50. The function has a horizontal asymptote at y = 50 as its x-value becomes smaller and smaller. This implies that before the training program started, the company's customer satisfaction rating was stable at 50%. When you slide point A to the far right, its y-value gets closer and closer to 70. The function has a horizontal asymptote at y = 70 as its x-value becomes larger and larger. This implies that after the training program ended, the company's satisfaction rating became stable at 70%. In this scenario, there are two limiting factors: First, for an average FasterAid customer service rep, it is difficult not to satisfy half the callers. This is why the customer satisfaction rating was originally stable at 50%. In training, employees learned some skills to improve customer satisfaction and their ratings improved. However, no matter how effective the training, it is difficult to consistently satisfy more than 70% of customers. This is the second limiting factor. It is a trademark characteristic of horizontal asymptotes that they represent some limiting factor.

Measuring How Fast Computers Improve

You have seen how computers have been constantly improving, but that the improvements have been slowing down in more recent years. Looking at how computers improve at particular instants, or in particular years, is a great way to see more detail on how computers improve in the short term. That is the big difference between average and instantaneous rates of change. Average rates of change look at how things change over a period of time, while instantaneous rates of change look at how things change at a particular instant. With that in mind, take a look at how computers have been improving in some particular years. CPU speed increased exponentially starting in the 1950s, but it hit a bottleneck around 2005. That is, computers have been getting closer and closer to a natural limitation in processing speed, based on the speed of electrons. The speed of new CPUs, in megahertz (MHz), released by Intel can be modeled by the following logistic function: s(t) = 5000/(1 + 20000e^(−0.29t)), where t is the number of years since 1970. The following is the function's graph: Point A (27, 558.5) implies that the speed of new Intel CPUs was approximately 558.5 MHz at the beginning of 1997. The graph also shows the instantaneous rate of change at this point as a red line. Notice that the slope of the red line is 143.87 MHz per year. This implies that, at the beginning of 1997, the speed of new Intel CPUs was increasing at 143.87 MHz per year. Remember that an instantaneous rate of change measures how two variables are changing with respect to one another at a particular instant. Here you are measuring how the CPU speed changed per year at the beginning of 1997, which is the particular instant you are interested in. Notice the red line, which touches the function's curve at point A. It does not cross the curve at two points, as it would when you calculate the average rate of change between two x-values. If a line touches a function's curve at one point, the slope of the line tells you the function's instantaneous rate of change at that point. In the graph, the slope of the line touching A (27, 558.5) is given as 143.87 MHz per year, which implies that the speed of new Intel CPUs was increasing by 143.87 MHz per year at the beginning of 1997. Had this momentum continued, the speed of new Intel CPUs would have reached 558.5 + 143.87 = 702.37 MHz one year later, at the beginning of 1998. The point (28, 702.37) is on the red line. However, by examining the graph, you can see the function's value at x = 28 is larger than 702.37, because the corresponding point on the function is located higher than (28, 702.37). This implies that the speed of Intel's new CPUs increased faster in 1997 than the trend displayed at the beginning of 1997. This is probably because, in 1997, the pace of developing new CPU technology was becoming faster than before. Compare this to the situation at point B. The blue line touches the function's curve at point B (38, 3766.71), which means that in 2008, processors operated at a speed of about 3,766.71 MHz. This is actually about 1,000 MHz above the true value in 2008, because models are never perfect. There will be more on that in just a bit. The slope of the line touching that point is given as 269.44 MHz per year, which implies that, at the beginning of 2008, the speed of new Intel CPUs was increasing at 269.44 MHz per year. At t = 39, the function's value is lower than the blue line's value. This implies that the speed of new Intel CPUs did not increase by 269.44 MHz in 2008.
That is probably because the technical challenges of increasing CPU speed were greater than expected. How do instantaneous rates of change vary over time in this scenario? To explore that, move points A and B in the following applet and see different values of the function's instantaneous rate of change as the points move. As you explore the applet, try to understand that a function's instantaneous rate of change at a point is the same as the slope of the line touching the function's curve at that point. You may be wondering why the rate of change of this model is off from the true CPU processing power available in 2008. Keep in mind that models are just tools to make an educated guess about what is going to happen in the future or what happened in the past. If you see that a model is not in line with data points you know, you should be skeptical about using the model. At the same time, you already know this CPU model is overestimating values for years after about 2004, so you could use this model as an overestimate for future values. Knowing that the values this model produces are overestimates still helps paint a picture of what the "most ideal" future might be in terms of CPU processing power. For this question, refer to the zoomed-in view of the previous graph, above. The instantaneous rate of change for point A at (30, 1153.7) is 257.48 megahertz (MHz). How do you interpret that instantaneous rate of change? The speed of new Intel CPUs was increasing at 257.48 MHz per year at the beginning of 2000. For this question, refer to the zoomed-in view of the previous graph, above. The instantaneous rate of change for point A at (30.93, 1409.78) is 293.56 megahertz (MHz). How do you interpret that instantaneous rate of change? The speed of new Intel CPUs was increasing at 293.56 MHz per year in late 2000.
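
If you would like to verify these numbers yourself, here is a minimal Python sketch of the lesson's model s(t) = 5000/(1 + 20000e^(−0.29t)). The difference-quotient helper is a standard numerical shortcut for estimating an instantaneous rate, not something defined in the lesson:

```python
import math

def s(t):
    # Lesson model: s(t) = 5000 / (1 + 20000e^(-0.29t)), t = years since 1970.
    return 5000 / (1 + 20000 * math.exp(-0.29 * t))

def rate_at(f, t, h=1e-6):
    # Central difference quotient approximating the instantaneous rate.
    return (f(t + h) - f(t - h)) / (2 * h)

print(round(s(27), 1))           # about 558.4 MHz, the lesson's point A
print(round(rate_at(s, 27), 2))  # about 143.87 MHz per year
print(round(s(38), 1))           # about 3766.7 MHz, the lesson's point B
print(round(rate_at(s, 38), 2))  # about 269.44 MHz per year
```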

How Polynomials Grow

You have seen how graphs can be used to determine which polynomials grow faster in the long term. Another way to tell comes from looking at the leading term of the polynomials. The leading term of a polynomial is the term with the largest exponent on the variable. This is also the term that determines the degree of the polynomial. Below are some example polynomials, their associated leading terms, and their degrees.

Polynomial | Leading Term | Degree
F(x) = 5x^2 + 3x − 7 | 5x^2 | 2
G(a) = −10.5a^9 + 7a^5 − 3.5a^2 | −10.5a^9 | 9
H(z) = 5.4z^2 − 2 | 5.4z^2 | 2

Why care about leading terms? With polynomials, the term with the highest power ends up "dominating" what the polynomial does as the variable gets larger and larger. That is the same logic as the degree of a polynomial; the degree indicates the most important exponent for the particular polynomial. Consider an example of how the leading term dominates the polynomial for large variable values. Consider the function F(x) = 5x^2 + 3x − 7. As the x-values get very large, 5x^2 will be much larger than the 3x or the −7, which is why 5x^2 is the leading term. For example, if x = 1,000, you get the following values for the various terms in F(x) = 5x^2 + 3x − 7:

Term | Value
5x^2 | 5(1,000)^2 = 5(1,000,000) = 5,000,000
3x | 3(1,000) = 3,000
−7 | −7

As you can see, the 5x^2 term takes on a much larger value than the other terms. This is why the leading term really does "lead the way" for the polynomial when the variables get large. This means the polynomial with the larger leading term will always grow faster in the long term. For example, say you are comparing the polynomial F(x) = 5x^2 + 3x − 7 to A(x) = 4x^2 + 1000x − 2. The leading terms can be compared to see that the function F(x) will grow faster in the long term. That is because 5x^2 will grow faster than 4x^2. What about G(a) = −10.5a^9 + 7a^5 − 3.5a^2 compared to B(a) = −11.5a^9 + 8a^8 − 2? In this case, the leading terms for G and B are −10.5a^9 and −11.5a^9, respectively. Do not forget the negatives here: the function B(a) will decrease faster in the long term compared to G(a). Finally, what about H(z) = 5.4z^2 − 2 compared to C(z) = 3z^3 − 2? The leading terms here are 5.4z^2 and 3z^3. Notice that these leading terms do not have the same degree. The 3z^3 term will outgrow the 5.4z^2 because z^3 has a higher exponent than z^2. You can also think of this as the 3z^3 term having more "power" than the 5.4z^2 term. Two computer scientists, Aleck and Brenda, are comparing algorithms to see which has the shorter runtime in the long term. The runtime for Aleck's algorithm is modeled by the function A(s) = 6s^2, while Brenda's algorithm is modeled by the function B(s) = 5s^2 + 3s + 1. In the long term, which algorithm will have the longer runtime, and why? Aleck's will have the longer runtime because 6s^2 will outgrow 5s^2. The leading term for the polynomial modeling Aleck's algorithm's runtime is 6s^2, while the leading term for the polynomial modeling Brenda's algorithm's runtime is 5s^2. 6s^2 will grow faster in the long term, meaning Aleck's algorithm will have a longer runtime overall. Lesson Summary As you wrap up this lesson, keep in mind that you will encounter some other functions in later units that have very different long-term behavior than polynomials do. For now, make sure you understand how to determine long-term behavior for polynomial functions. Here is a list of the key concepts you learned in this lesson: Instantaneous rates of change are good indicators of what is happening in the long term for variables.
You can compare two functions' instantaneous rates of change to see which is growing more quickly. Polynomials are well suited to model data that has "turns." However, it can be dangerous or misleading to interpret long-term trends for data that has many "turns." One way to determine which polynomial will outgrow the other is to compare the leading terms. Polynomials with higher degrees will outgrow polynomials with lower degrees. For polynomials with the same degree, the leading term with the larger coefficient tells you which one grows faster.
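
As a quick numeric check of the leading-term idea, the following Python sketch uses the lesson's F(x) = 5x^2 + 3x − 7 and A(x) = 4x^2 + 1000x − 2. It shows the leading term dwarfing the other terms at x = 1,000, and F overtaking A once x is large enough:

```python
def F(x):
    return 5 * x**2 + 3 * x - 7

def A(x):
    return 4 * x**2 + 1000 * x - 2

# At x = 1000 the leading term of F dwarfs the other two terms.
x = 1000
print(5 * x**2, 3 * x, -7)        # 5000000 3000 -7

# A's big lower-order term (1000x) wins early, but F's larger leading
# term takes over in the long term.
for x in (10, 100, 1000, 10000):
    print(x, F(x) > A(x))          # False, False, True, True
```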

Possible Outliers, Interpolation, and Extrapolation

You have seen how outliers affect regression models, both in terms of the equation of a regression model and its graph. Outliers affect interpolation and extrapolation values as well. The following graph is a model for the wolf population in a large forest. The forest rangers are trying to model the population to see what the maximum capacity is for the wolf population in the forest. Using a logistic regression, they found that the maximum capacity for the wolves in the forest was about 175 wolves. That said, they also noticed a spike in the data for 2010. The spike at point K in this graph was considered a possible outlier, so the forest rangers investigated that data point further. The rangers found out that the wolf population surged in 2010 for unknown reasons, but they confirmed that the data was accurate. Since the point was not truly an outlier, it remains in the data set. Still, the rangers wondered how this spike at point K affected the regression model. The rangers decided to run a regression model without point K just to understand its impact. The next graph is the regression without point K. The rangers noticed that the maximum capacity of the forest without point K was estimated at about 167 wolves, a difference of about 8 wolves from the previous prediction. They then wondered how the interpolation and extrapolation values might differ between the two models. Next is an applet where you can explore this. How different are the two functions at predicting interpolation and extrapolation values? Do you notice any general differences in the two models? As you can see, most of the time, the interpolation values for the second model are less than they are for the first model. There is a brief interpolation interval, from about t = 0 to t = 2, where the opposite is true. In general, though, the interpolation values for the two models are pretty similar. As for extrapolation values, the second model is noticeably smaller than the first model as the rangers look to the future. In the context of wolves, this may not seem like a substantial difference. However, if this were a model predicting the revenue of a company in thousands of dollars, the context of this difference might shift quite a bit. In general, the extrapolation values vary a lot more between the two models compared to the differences in the interpolation values. Even potential outliers can shift both interpolation and extrapolation values. Usually, extrapolation values are much more sensitive to potential outliers; that is, they shift around more. This is another reason to be conservative when using extrapolation. Keep in mind that with a real data set, you should only calculate a new model without a data point if it truly is an outlier. The wolf population example, shown with point K removed, was purely for demonstration purposes.
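
If you want to reproduce the rangers' experiment in spirit, the sketch below fits a logistic curve to a data set with and without a spiked point. The wolf counts are synthetic stand-ins generated from a made-up trend (none of these numbers come from the course's data), and scipy.optimize.curve_fit is one common tool for this kind of regression:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    # L is the carrying capacity (the "maximum capacity" in the lesson).
    return L / (1 + np.exp(-k * (t - t0)))

t = np.arange(0, 16, dtype=float)
pop = logistic(t, 170, 0.5, 7)   # made-up underlying trend
pop[10] += 40                    # a spike like point K

p0 = [170, 0.5, 7]               # rough starting guesses for the fit
popt_with, _ = curve_fit(logistic, t, pop, p0=p0)

keep = np.arange(len(t)) != 10   # drop the spiked point
popt_without, _ = curve_fit(logistic, t[keep], pop[keep], p0=p0)

# The fitted capacity L shifts once the spike is removed, mirroring the
# rangers' 175-versus-167 comparison (the exact values here will differ).
print(round(popt_with[0], 1), round(popt_without[0], 1))
```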

Fourth-Degree Polynomials in Applications

You have seen how quadratic and cubic polynomial functions are used to model data in real-life situations. The more changes in direction a graph has, based on its data set, the higher the degree of the polynomial needed. Now you will see how a 4th-degree polynomial is used to model the number of customers at a restaurant. This example makes it clear why a 3rd-degree polynomial would not work for this situation. Consider this next example: Scarlet Dragon Chinese Restaurant is a popular lunch spot. With recent price fluctuations in food, the owner knows she cannot afford to make poor predictions about the number of customers each day, because that would cause food waste. The average number of customers at Scarlet Dragon Chinese Restaurant each day can be modeled by this function: c(t) = −0.2t^4 + 4t^3 − 26t^2 + 63t, where t is the number of hours since 10:00 a.m., when the restaurant opens. The following graph depicts this function: The number of customers has two peak hours, around 12:00 p.m. and then again around 6:15 p.m. Due to those two peaks, the function needs to "turn" 3 times, at t = 2, t = 4.5, and t = 8.25. Examine the following graphs of a few polynomials:

Degree of Polynomial | Maximum Number of Turns in Graph
1 | 0
2 | 1
3 | 2
4 | 3
n | n − 1

As the data has more and more turns, the polynomial needs to have a higher and higher degree to model the data correctly. The simplest polynomial that can produce n − 1 turns in the data is a degree-n polynomial. Josiah saves $250 in the bank every month. Which type of polynomial function best models the amount of money in his account? A 1st-degree polynomial: the amount of money keeps increasing at a constant rate and there are no turns, so a linear model should be used. An object is launched into the air and eventually falls back to the ground. The object's height can be modeled by which type of polynomial? A 2nd-degree polynomial: the data has one turn, and a 2nd-degree polynomial's graph also has one turn, because the object is described as going up and then coming back down (which is one change in direction). Lesson Summary In this lesson, you learned how polynomial functions can model real-life scenarios, such as server usage, ice cream shop sales, game publishing, and restaurant traffic. Here is a list of the key concepts in this lesson: A first-degree polynomial is a linear function and is of the form f(x) = ax + b. A second-degree polynomial is a quadratic function, is of the form f(x) = ax^2 + bx + c, and has its independent variable raised to the second power, one higher than a linear function. A third-degree polynomial is a cubic function, is of the form f(x) = ax^3 + bx^2 + cx + d, and can model data with more than one turn, or curve, in the data. Polynomials of degree 4 or higher can simply be referred to as fourth-degree polynomials, fifth-degree polynomials, and so on. The higher a polynomial's degree, the more turns the polynomial's graph can have. If the data or a situation has n − 1 turns, then an nth-degree polynomial should be used to model the data or situation. When calculating outputs for polynomials, make sure that you simplify exponents first, then multiplication and division, and finally addition and subtraction. This process is referred to as following the order of operations.
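
To see the order of operations at work when calculating an output of the restaurant model, here is a short Python sketch evaluating c(t) at t = 2, which corresponds to noon:

```python
def c(t):
    # c(t) = -0.2t^4 + 4t^3 - 26t^2 + 63t: exponents are applied first,
    # then multiplication, then the additions and subtractions.
    return -0.2 * t**4 + 4 * t**3 - 26 * t**2 + 63 * t

# -0.2(16) + 4(8) - 26(4) + 63(2) = -3.2 + 32 - 104 + 126 = 50.8
print(c(2))   # about 50.8 customers on average around noon
```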

The Correlation Coefficient

You have seen how the best-fit line describes the way the points of data in a scatterplot trend, but how well does the best-fit line fit the data mathematically? The good news is that there is a numerical measure that tells you how closely the data values in a scatterplot follow the path of a straight line; this measure is called the correlation coefficient. The correlation coefficient, r, is a number between -1 and 1 that measures the strength and direction of a linear relationship. The closer r is to 1 or -1, the stronger the linear relationship. The closer the correlation coefficient is to 0, the weaker the linear relationship. If the data trends upward from left to right, r will be positive, and if the data trends downward from left to right, r will be negative. It is rare to see r = 1 or r = -1, as these correlation coefficients indicate a perfect linear relationship, which almost never happens with real-world data. The correlation coefficient is referred to as the r-value. In addition to using r, you will also use the coefficient of determination to see how well a function fits, or models, a data set. The coefficient of determination is written as r^2 and sometimes referred to as the r^2-value. The coefficient of determination is a number between 0 and 1, with values closer to 1 indicating a strong fit and values closer to 0 indicating a weak fit. Another way of thinking about the coefficient of determination is that it gives an idea of how big a difference you can expect between the data points (or real-world values) and the values predicted by the model. Please note that although this lesson concerns linear relationships, the coefficient of determination is used not only for linear relationships but for non-linear relationships as well. The following graph shows an example of a linear correlation and the associated r-value. See if you can calculate the associated r^2-value. You should have found that r^2 = (0.85)^2 = (0.85)(0.85) = 0.7225. You might be wondering why you would use the r^2-value in addition to the r-value. The r-value provides information about the strength and direction of a linear relationship, while the r^2-value is the appropriate measure for determining how well a particular function fits, or models, the data. We will use the four characterizations in the table below. Using this table, how would you characterize the model above: strong, moderate, weak, or no model?

r^2-value | Characterization
0.7 < r^2 ≤ 1 | strong model / strong correlation
0.3 < r^2 ≤ 0.7 | moderate model / moderate correlation
0 < r^2 ≤ 0.3 | weak model / weak correlation
r^2 = 0 | no model / no correlation

Since the r^2-value above was 0.7225, the function above was a strong model for the data. The following graph shows another scatterplot with its model and r-value. How would you characterize this model: strong, moderate, weak, or no model? The correlation coefficient suggests that the best-fit line is a weak fit, since r^2 = (−0.48)^2 = (−0.48)(−0.48) = 0.2304. This makes sense; the data trends downward, but it is really all over the place. The data is so sparse and scattered about that the best-fit line does not fit the data very well. On the other hand, you can see that the points in this next scatterplot very clearly move upward and to the right from one to the next: The points do not all fall in a single line, but they fall more closely along their best-fit line than in the other scatterplots you have looked at. The correlation coefficient of this scatterplot is r = 0.93.
Using the r^2-value, you can see this is a strong fit: r^2 = (0.93)^2 = 0.8649. Note that if the points on the scatterplot trend upward, the correlation coefficient, r, will be positive. If they trend downward, then r will be negative. Suppose a scatterplot shows a linear relationship with a correlation coefficient of 0.3744. What could be concluded about this scatterplot? The points must be spread out to have such a weak correlation, but they must also trend upward on average, since that correlation is positive. Which of these numbers is most likely to be the correlation coefficient? The line is close to all the points and trends downward, so it should have a strong negative correlation. Lesson Summary In this lesson, you encountered a couple of important tools: a scatterplot and a best-fit line, sometimes called "a line of best fit." These tools are used often in data analysis in many fields. Here is a list of the key concepts in this lesson: A scatterplot represents individual points of data that have been gathered. A best-fit line is used to describe the trends in the data of a scatterplot, and it can be used to predict more data points for the plot. The correlation coefficient, r, tells you how strong a linear relationship is and how closely the data values in a scatterplot fall to a straight line. The correlation coefficient also indicates whether the data trends upward or downward. The coefficient of determination, r^2, tells how well a particular function fits, or models, the data in a scatterplot.
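
When you have raw (x, y) data rather than a printed r-value, a library can compute r and r^2 directly. The sketch below uses numpy with made-up data points (an illustration only, not the lesson's scatterplots):

```python
import numpy as np

# Hypothetical data with a clear upward trend.
x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

r = np.corrcoef(x, y)[0, 1]         # correlation coefficient
print(round(r, 3), round(r**2, 3))  # about 0.999 and 0.998: a strong fit
```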

Optimizing Winter Gear Sales

You have seen the two ways that concave up and concave down can occur. You will start putting that knowledge to use now to identify optimal situations. For the first situation, consider the following function, which models Winter Gear's daily revenue, in thousands of dollars, in 2010 and 2011. Focus on the segment from point A to point B first. The function shows that revenue was concave down during this segment, or increasing at a slower and slower rate. However, would it be preferable for this section of the graph to be concave up from the perspective of Winter Gear's management? Here is an alternate graph that shows how this might look: If the segment of the graph from point A to point B were concave up, it would mean that sales were increasing at a faster and faster rate. That would certainly be preferable from a sales and revenue point of view. However, the fact is that this particular segment of the graph is concave down, which means that sales and revenue were increasing, but at a slowing rate. This trend changed after point B, and revenue started decreasing faster and faster, placing Winter Gear in a less advantageous financial position. Of course, Winter Gear's management would prefer a concave-up segment from point A to point B, reflecting faster and faster increases in revenue. In this scenario, zoom in on the segment from point B to point C. In reality, this segment of the curve was concave down. It is represented by the red portion of the curve between point B and point C.

Comparing Two Installation Plans

You have seen how to compare various rates of change at different times or over different intervals. Now you will turn to comparing situations by how the variables change in the long term. Consider this example: Highcrest needs to replace the PCs its employees use. To be sure Highcrest follows the best possible process for this PC replacement project, the company opened a bidding system and eventually winnowed the contenders down to two bidders, which it refers to as Plan A and Plan B. Management needs to select one of the plans. The number of PCs installed during the replacement period for Plan A and Plan B can be modeled by the following functions: a(t) = 8236/(1 + 2059e^(−0.35t)) − 4 and b(t) = 8236/(1 + 2059e^(−0.25t)) − 4. The following graph depicts these functions. Since both bidders started from 0 computers replaced when t = 0 and ended at 8,232 computers, the average rate of change from the project's starting day to its ending day is the same for both bids. For both functions, at the left end and right end of the curve, the instantaneous rate of change is close to 0, indicating very little change. This is true for all logistic functions. However, the instantaneous rates of change are different in the middle sections of the two functions. Looking at the graph, notice that point A has the largest instantaneous rate of change for a(t), and point B has the largest instantaneous rate of change for b(t). Since the line touching a(t) at point A is steeper than the line touching b(t) at point B, the largest instantaneous rate of change of a(t) is greater than that of b(t). From the graph, you can also see that a(t) grows faster than b(t). Why does this happen? The two equations are very similar; you can check that with the equations here: a(t) = 8236/(1 + 2059e^(−0.35t)) − 4 and b(t) = 8236/(1 + 2059e^(−0.25t)) − 4. However, notice that the exponents are different. For Plan A, the exponent is −0.35t; for Plan B, the exponent is −0.25t. Counterintuitively, the negative here just means that these two logistic functions are growing. Think about that for a moment: the negative exponent is an indication that the function is increasing, not decreasing. However, what is more important in this case are the actual numbers in front of the independent variable, like 0.35 and 0.25. It is these numbers that determine the steepness of a logistic function. You can think of these numbers as an indicator of the rate of growth: the bigger the number, the greater the rate of growth. Of course, if a function has a greater rate of growth, then its instantaneous rates of change will be steeper, as well. For Highcrest, all of this means that the bidder for Plan A will get more computers installed faster, then do its intensive testing, while the bidder for Plan B spends more time upfront on testing, then focuses on the computer installation. Which is better? It really depends on the company's needs, but at least now management can rigorously compare the two plans. Lesson Summary In this lesson you learned how the numbers in a logistic function's equation affect the function's behavior. Here is a list of the key concepts in this lesson: For a logistic function, the number in the exponent indicates how quickly the function increases or decreases. In a logistic function, positive exponents mean the quantity is decreasing, while negative exponents mean it is increasing.
For two logistic functions that are equivalent except for their exponents, the function with the larger magnitude coefficient in the exponent will grow or decrease faster. Two logistic functions that are equivalent except for their exponents will have similar average rates of change over large intervals; in fact, the instantaneous rate of change will tend toward zero as the independent variable increases.
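
To make the comparison concrete, this Python sketch evaluates both plans and estimates each one's largest instantaneous rate of change at the curve's midpoint. Locating the steepest point at t = ln(2059)/k is a standard property of logistic curves rather than a step given in the lesson:

```python
import math

def plan(t, k):
    # Shared form of both bids: 8236 / (1 + 2059e^(-k*t)) - 4,
    # where only the exponent's coefficient k differs.
    return 8236 / (1 + 2059 * math.exp(-k * t)) - 4

def rate_at(f, t, h=1e-6):
    # Difference-quotient estimate of the instantaneous rate of change.
    return (f(t + h) - f(t - h)) / (2 * h)

for name, k in (("Plan A", 0.35), ("Plan B", 0.25)):
    mid = math.log(2059) / k                  # steepest point of the curve
    rate = rate_at(lambda t: plan(t, k), mid)
    print(name, round(mid, 1), round(rate, 1))
# Plan A peaks around t = 21.8 at roughly 721 PCs installed per unit of
# time; Plan B peaks later, around t = 30.5, at roughly 515.
```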

Visual Interpretations of Average Rates of Change

You have seen how to interpret average and instantaneous rates of change so far. Now you will look more at how these concepts are represented visually on a graph. This will allow you to more easily compare situations in later lessons. Start with this example: Pinnacle and Regis are internet service providers (ISPs). Both companies have been trying to decrease their numbers of dial-up customers so that they will no longer have to support an outdated service model in addition to a broadband service model. The number of dial-up customers for each company, P for Pinnacle and R for Regis, can be modeled by these logistic functions: P(t) = 12/(1 + 0.23e^(0.3t)) and R(t) = 11.5/(1 + 0.23e^(0.4t)), where t is the number of years since 2000. The following shows the two functions graphed: [The graph has Years Since 2000 plotted on the x axis and Number of Customers in Thousands plotted on the y axis and shows two curves. The curve labeled P of t equals 12 over left parenthesis 1 plus 0.23 times e to the power of 0.3 times t right parenthesis slopes downward through the first quadrant through (6, 5) and (13, 1), falling almost horizontally above the x axis. The curve labeled R of t equals 11.5 over left parenthesis 1 plus 0.23 times e to the power of 0.4 times t right parenthesis slopes downward through the first quadrant through (6, 3.2), and approaches the x axis at x equals 16.] ©2018 WGU, Powered by GeoGebra Examining this graph, you can see that Regis had fewer dial-up customers in 2000 than Pinnacle, but not by much. If you plug t = 0 into the functions P and R, you can see that Pinnacle and Regis had 9,760 and 9,350 dial-up customers, respectively. Remember that Pinnacle had a company goal in 2002 to retain fewer than 2,000 dial-up customers by 2010. Regis had a similar goal. You can compare the two companies' average rates of change to see how the two compared at various points along the way to their goals. For example, looking at the graph, can you estimate which company did a better job at reducing the number of dial-up customers between 2002 and 2010? See if you can visually estimate the average rate of change for the two companies. Which had a steeper average rate of change over this time? Use the following applet to check your estimate of what the average rates of change actually are. In the applet, you should have set the time values to t = 2 and t = 10 for both companies. Visually, you should be able to see that the average rate of change for Regis is slightly steeper than the average rate of change for Pinnacle. The actual values of the average rates of change were -0.79 for Pinnacle and -0.84 for Regis. These values confirm your visual estimate: Regis was decreasing its number of dial-up customers by about 840 per year between 2002 and 2010, while Pinnacle was decreasing its number by about 790 per year. What does this tell you about the management of the two companies? While Regis had fewer dial-up customers to begin with, Regis also had superior methods for getting its customers to move away from the dial-up option. Perhaps Pinnacle should adopt some of Regis's policies; maybe then Pinnacle would have met its goal of fewer than 2,000 customers retaining dial-up in 2010. In working through this example, you learned that to calculate or estimate an average rate of change, you need to look at a line that crosses a function's curve at two points. You can use the slope formula to calculate an exact average rate of change.
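
You can confirm the −0.79 and −0.84 values with a few lines of Python; the avg_rate helper below is just the slope formula applied to each curve at t = 2 and t = 10:

```python
import math

def P(t):
    # Pinnacle: P(t) = 12 / (1 + 0.23e^(0.3t)), in thousands of customers.
    return 12 / (1 + 0.23 * math.exp(0.3 * t))

def R(t):
    # Regis: R(t) = 11.5 / (1 + 0.23e^(0.4t))
    return 11.5 / (1 + 0.23 * math.exp(0.4 * t))

def avg_rate(f, t1, t2):
    # Slope of the line crossing the curve at t1 and t2.
    return (f(t2) - f(t1)) / (t2 - t1)

print(round(avg_rate(P, 2, 10), 2))  # -0.79 thousand customers per year
print(round(avg_rate(R, 2, 10), 2))  # -0.84 thousand customers per year
```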

Putting It All Together

You have seen how to visually identify instantaneous rates of change and interpret them. Now you will look at comparing two instantaneous rates of change to see which one may be optimal in a certain context or situation. Consider the following example: Better Hires is always looking for companies interested in advertising openings on their website. Nadia, the manager of the marketing department, has a meeting with the board of directors to show her team's performance for the previous year in the form of a model she has created. The function h(x) = 18(1.03)^x models the number of client leads found after x weeks. She graphed the function as seen in the following graph. To describe the growth last year in her team's performance, Nadia used instantaneous rates of change. She started with the team's performance near the beginning of the year at week 15, or x = 15, and compared it with their performance near the end of the year at week 50, or x = 50. You can estimate the instantaneous rates of change for week 15 and week 50 using the graph below. You can easily see the difference between these two slopes, which show the growth Nadia's team has achieved over the year. The slope is much steeper for x = 50, indicating that performance was improving much faster at that point than at x = 15. To quantify the growth, Nadia calculated the instantaneous rate of change at these two weeks and found the rates 0.828 when x = 15 and 2.332 when x = 50. This means that at the start of week 15, the team's leads were increasing by 0.828 leads per week, whereas by week 50 they were increasing by 2.332 leads per week. The rates of change are helpful from a management standpoint because they give a way to compare performance at two different times of the year. Moreover, these rates also show that Nadia is leading her team not only to stay on top of their workload, but to increase production, since the number of leads per week was growing so quickly toward the end of the year. The manager has to present her team's performance. The following graph shows the number of leads per week versus time, measured in weeks. How should the manager describe the overall trend of the instantaneous rate of change from week 0 to week 40? The instantaneous rates of change continually increase from week 0 to week 40, meaning that the team is getting more and more leads per week. Every week the instantaneous rate of change increases, which can be seen in the slope of the line that touches each successive point. Lesson Summary In this lesson, you had quite a bit of practice in interpreting instantaneous rates of change and looking for an optimal solution for given situations, such as the ones involving Better Hires, Campbell Computers, and the state of California. Here is a list of the key concepts in this lesson: When trying to decide which rate of change to use, start by graphing the function. After graphing the function, decide which rate of change is greater. If the amount is growing, the greater rate of change is the larger number. If the amount is decreasing, then the number closer to 0 is the greater rate of change. Decide if a greater rate of change is optimal in the situation. Make this decision by determining if a greater rate of change is helpful or unhelpful. The point on the function where the line touching it has the least, or most negative, slope is at x = 0. Why does the design team not use this point?
The point x = 0 does not actually make sense in this question; there is no way for a computer not to have a processor. Really, the domain for this function should be x ≥ 1. The model's equation does not have this limitation, though, so the x = 0 point occurs on the graph even though it does not make sense in reality.
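
Returning to Nadia's model, you can check her two rates numerically by approximating each instantaneous rate of change with a difference quotient on h(x) = 18(1.03)^x. The numerical shortcut is an assumption of convenience; the lesson reads the rates off the graph:

```python
def h(x):
    # Nadia's model: h(x) = 18 * 1.03^x client leads after x weeks.
    return 18 * 1.03 ** x

def rate_at(f, x, step=1e-6):
    # Central difference quotient: the slope of the line touching the curve.
    return (f(x + step) - f(x - step)) / (2 * step)

print(round(rate_at(h, 15), 3))   # about 0.829 (the lesson rounds to 0.828)
print(round(rate_at(h, 50), 3))   # about 2.333 (the lesson rounds to 2.332)
```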

What It Means to Be a Best-Fit Function

You have seen that a logistic function was a better choice than an exponential function in that last scenario, but how can you tell how well the best-fit function fits the data mathematically? To determine how well the function fits, examine the coefficient of determination. The coefficient of determination is represented by r^2, so sometimes it is referred to as the r^2-value. There are actually three categories used to judge how strong a particular function is at modeling data: strong, moderate, and weak. The following table gives guidelines on how to judge the strength based on the coefficient of determination, or the r^2-value.

r^2-value | Characterization
0.7 ≤ r^2 ≤ 1 | strong model / strong correlation
0.3 ≤ r^2 < 0.7 | moderate model / moderate correlation
0 < r^2 < 0.3 | weak model / weak correlation
r^2 = 0 | no model / no correlation

For the exponential function, the coefficient of determination was r^2 = 0.1595; for the logistic function, the coefficient of determination was r^2 = 0.9954. The logistic function is a better choice than an exponential function here, since the logistic model is a strong model while the exponential model is a weak model. Remember that the LSR algorithm calculates the best function whenever it is used to calculate a regression function. Said another way, of all exponential functions that exist, the exponential function h(x) = 0.9856e^(0.1887x) is your best choice to model the virus data mathematically. Similarly, the logistic function g(x) = 29.6962/(1 + 101.205e^(−0.3994x)) is the best choice, mathematically, among all logistic functions to model the virus data. That is why these are called the best-fit functions. In most cases, you must decide which type of function to choose based on the given data. This is why you need to be familiar with all types of functions and ready to use the appropriate one for different data sets. In this lesson, you tested two possible functions as the function of best fit for a set of data related to the spread of a virus. You learned that exponential functions are seldom best for such data sets but logistic functions usually work. You also used the coefficient of determination to confirm a function's strong or weak fit to the data. Here is a list of the key concepts in this lesson: You can choose different types of functions to model a given set of data, depending on the data's shape. S-shaped data will generally be best modeled by a logistic function. The coefficient of determination, or the r^2-value, is the best tool for evaluating how well a regression function fits the data; use the characterization table above to judge the strength of the fit. When you use the LSR algorithm to calculate a logistic regression, the resulting logistic function will be the best-fit function, meaning no other logistic function will have a higher coefficient of determination, or r^2-value.
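
The characterization table maps naturally onto a small helper function. Here is a sketch in Python using the thresholds exactly as the table states them:

```python
def characterize(r_squared):
    # Thresholds taken from the lesson's characterization table.
    if r_squared == 0:
        return "no model / no correlation"
    if r_squared < 0.3:
        return "weak model / weak correlation"
    if r_squared < 0.7:
        return "moderate model / moderate correlation"
    return "strong model / strong correlation"

print(characterize(0.1595))   # weak (the exponential fit)
print(characterize(0.9954))   # strong (the logistic fit)
```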

Given the graph of a logistic function, translate solutions to logistic equations into real-world meaning.

You have spent some time now with the Sunrise Sky Spa and Retreat Spa situation. The two companies are competing for members in a small city. You have worked with their functions and their graphs. In this lesson, you will learn how you can solve equations related to those functions and graphs. Solving these equations will lead to a deeper understanding for the owners of Sunrise and Retreat. In this lesson, you will continue estimating input values from output values, but you will also take it a step further by identifying and interpreting solutions to equations in several different scenarios, including that of the Sunrise Sky and Retreat spas. At first, you will use technology to aid in solving logistic equations, and then you will transition to using just a graph to solve logistic equations.

Visual Interpretations of Instantaneous Rates of Change

You just saw that Regis consistently outperformed Pinnacle in terms of reducing the number of customers with dial-up internet. You did that analysis using average rates of change. How might the two companies compare when you analyze instantaneous rates of change? It will actually be a little different, since instantaneous rates of change look at particular instants. For example, use the following applet to look at the general trends between the instantaneous rates of change for Pinnacle versus Regis. You may have noticed that Regis consistently outperformed Pinnacle in terms of the instantaneous rate of change until halfway through 2006 (t = 6.5). It was at this time that the two companies had the same instantaneous rate of change. What does that mean? Having the same instantaneous rates of change means that Pinnacle was finally able to catch up with Regis in terms of decreasing the number of dial-up customers per year. Does this mean that Pinnacle was able to catch up in real numbers of customers? Well, no. You can see that in the times after t = 6.5, Pinnacle consistently had better instantaneous rates of change, but it was a matter of "too little, too late." Regis had already made so much progress that Pinnacle was never able to catch up, even when Pinnacle's instantaneous rates of change surpassed Regis's. See if you can visually estimate some of these instantaneous rates of change. For example, can you see that at t = 2, Regis had the optimal rate of change over Pinnacle? Which had the optimal rate of change at t = 10.5? The following graphs compare these two time values with the regular graph for you to visually estimate the optimal instantaneous rate of change. Then you will see a graph with the instantaneous rates of change at particular instants. [The first graph plots Years Since 2000 on the x axis and Number of Customers in Thousands on the y axis and shows two curves. The curve labeled P of t equals 12 over left parenthesis 1 plus 0.23 times e to the power of 0.3 times t right parenthesis slopes downward through the first quadrant through (6, 5) and (13, 1), falling almost horizontally above the x axis. The curve labeled R of t equals 11.5 over left parenthesis 1 plus 0.23 times e to the power of 0.4 times t right parenthesis slopes downward through the first quadrant through (6, 3.2), and approaches the x axis at x equals 16. The second graph plots Years Since 2000 on the x axis and Number of Customers in Thousands on the y axis. A curve labeled R of t equals 11.5 over left parenthesis 1 plus 0.23 times e to the power of 0.4 times t right parenthesis slopes downward through the first quadrant through the point (2, 7.61), and approaches the x axis at x equals 16. A dotted line slopes downward from the second quadrant and intersects the curve at the point (2, 7.61). Another curve labeled P of t equals 12 over left parenthesis 1 plus 0.23 times e to the power of 0.3 times t right parenthesis slopes downward through the point (2, 8.46), and then falls almost horizontally above the x axis. Another dotted line slopes downward from the second quadrant and intersects the curve at the point (2, 8.46). Text on the second graph reads: Instantaneous Rate of Change Regis equals negative 1.03. Instantaneous Rate of Change Pinnacle equals negative 0.75.] ©2018 WGU, Powered by GeoGebra t = 2: Here, Regis has the steeper, more negative, instantaneous rate of change, so it is doing an optimal job in 2002.
[The first graph plots Years Since 2000 on the x axis and Number of Customers in Thousands on the y axis and shows two curves. The curve labeled P of t equals 12 over left parenthesis 1 plus 0.23 times e to the power of 0.3 times t right parenthesis slopes downward through the first quadrant through (6, 5) and (13, 1), falling almost horizontally above the x axis. The curve labeled R of t equals 11.5 over left parenthesis 1 plus 0.23 times e to the power of 0.4 times t right parenthesis slopes downward through the first quadrant through (6, 3.2), and approaches the x axis at x equals 16. The second graph plots Years Since 2000 on the x axis and Number of Customers in Thousands on the y axis. A curve labeled R of t equals 11.5 over left parenthesis 1 plus 0.23 times e to the power of 0.4 times t right parenthesis slopes downward through the first quadrant through the point (10.5, 0.7), and approaches the x axis at x equals 16. A dotted line slopes downward from the second quadrant and intersects the curve at the point (10.5, 0.7). Another curve labeled P of t equals 12 over left parenthesis 1 plus 0.23 times e to the power of 0.3 times t right parenthesis slopes downward through the point (10.5, 1.88), and then falls almost horizontally above the x axis. Another dotted line slopes downward from the second quadrant and intersects the curve at the point (10.5, 1.88). Text on the second graph reads: Instantaneous Rate of Change Regis equals negative 0.26. Instantaneous Rate of Change Pinnacle equals negative 0.48. A line with a closed dot is shown on the graph labeled Time sub 1 equals 10.5.] ©2018 WGU, Powered by GeoGebra t = 10.5: Here, Pinnacle has the steeper, more negative, instantaneous rate of change, so it is doing an optimal job halfway through 2010. In working through this example, you learned that to estimate an instantaneous rate of change, you need to look at a line that touches a function's curve at one point. You also learned that when comparing two average or two instantaneous rates of change, the greater rate of change is whichever line is steeper. Note that you need to be able to draw such lines, touching the function's curve at one point, in your head and compare their steepness, or slope. Lesson Summary In this lesson, you compared average and instantaneous rates of change across different situations and at different times. You then identified the optimal situation or time. Here is a list of the key concepts in this lesson: Visually, the steeper rate of change is the greater-magnitude rate of change. This is true for both average and instantaneous rates of change. You can visually compare two average rates of change by seeing which average rate of change is steeper. You can also visually compare two instantaneous rates of change by seeing which instantaneous rate of change is steeper.

Given a real-world scenario modeled by a logistic function, translate a given rate of change of the logistic function into real-world meaning.

You may have heard the growing human population on Earth described as an "explosion." Here is why: the rate of growth for the planet's population hit its peak around 1963, but since then, even though the rate slowed, the global population has continued to increase. Today, Earth supports over 7.5 billion people, with an expectation that the population will reach 9 billion by 2050 (West, 2017). Did you notice how important the rate of change is in that description? In the last unit you looked at instantaneous rate of change and average rate of change for exponential functions. This unit revisits rates of change for logistic functions, and this lesson begins by demonstrating what an average rate of change means when you are working with logistic functions.

Given a real-world situation modeled by a logistic function and a rate of change at a specified x-value, interpret the rate of change in context of the real-world situation.

You may have seen dial-up internet service in old movies like You've Got Mail with Tom Hanks and Meg Ryan. As with many things in movies, dial-up did not look too bad on screen, but in real life, dial-up service was prone to interruptions and was very, very slow. Given a choice, few people today would opt for dial-up internet access. Choosing between options is not always as easy as choosing between high-speed broadband access and dial-up, though. A function can help with this. In the past few lessons, you learned the concepts of average rate of change and instantaneous rate of change, as well as how to interpret them in context. In this lesson, you will learn how to compare them visually, using graphs of functions.

Measuring How a Stock Portfolio Grows

You may have seen how to apply average rates of change to other functions in past lessons. In the example below, you will see how the average rate of change still works for any function, no matter how it looks or behaves. Consider this example: Ron has a stock portfolio and has tracked its performance over the last year. Each month, the portfolio's value depends on the ups and downs of the stock market. In some months, Ron's portfolio has grown in value, while in other months it has declined. Examine the following graph to see the monthly changes. Notice that the graph does not fit any of the shapes of the functions you have worked with previously. This happens often with real data. When there is no specific function you can apply to the data, you can still analyze the data using tools like rates of change. For example, depending on which two months you compare on Ron's graph, there is a different average rate of change. Recall that linear functions have only one rate of change that always remains the same; this is because a linear function has the same slope anywhere along it. This is not the case for Ron's up-and-down portfolio balance. Ron's stock portfolio graph has multiple rates of change since the values are changing each month. However, the graph's ups and downs do not prevent you from computing an average rate of change for a portion, or even the entirety, of the data. You know that computing the average rate of change is the same as computing the slope of the line between two points on a graph. It is a constant rate of change over a particular time period. If you want to determine the average rate of change between the second and eighth months of Ron's graph, approximate the points on the graph for month 2 and month 8 and compute the slope. Using coordinates very close to these months, but rounded for convenience, say (2, 111,837) and (8, 117,209), the slope would be: m = (y2 − y1)/(x2 − x1) = (117,209 − 111,837)/(8 − 2) = 5,372/6 ≈ 895.33. This result means that the average rate of change between the second and eighth months is approximately $895.33 per month. This number implies that between the second and the eighth month, Ron's stock portfolio increased by an average of $895.33 per month. There were ups and downs during those months, but on average the portfolio's value was increasing, and by a specific value, each month. Now examine instantaneous rates of change using the same graph. Between the first and second months, Ron's portfolio made a substantial gain. The function's instantaneous rate of change was $11,200 per month about halfway through month 1. This result implies that at that very moment, the value of Ron's stock portfolio was increasing at $11,200 per month. Notice that this increase is just a short-term trend, not a bankable fact. Ron can use this rate to predict what might happen in the near future, but he must be aware that there could be a big difference between a projected future value for his portfolio and what will actually happen. In this example, the stock market took a downturn, losing value over the next two months instead of continuing to climb. An important caution about instantaneous rates of change: when you use an instantaneous rate of change to predict future values, do not predict too far into the future. Look at a second example involving Ron's portfolio. In the middle of the seventh month, the function's instantaneous rate of change was −$6,800 per month. This number indicates a trend of losing $6,800 per month for the near future.
Indeed, the portfolio did lose money for a short time. However, the portfolio's value more than recovered in the next few months, rising above $140,000 at one point. Had Ron based decisions on an instantaneous rate of change of −$6,800 per month, he may have made mistakes. The downward trend was not valid for more than a very short time. This is why you always need to remember that instantaneous rates of change measure how things are changing at an instant. Since stocks frequently change in an instant, this means you should not use instantaneous rates of change to get a grasp on how things might change over a longer period of time. If you want to know how things are changing for more than an instant, you need to use an average rate of change.
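
The month-2-to-month-8 calculation is just the slope formula, which you can wrap in a small reusable helper. A minimal sketch using the two points read off Ron's graph:

```python
def avg_rate(p1, p2):
    # Slope formula: m = (y2 - y1) / (x2 - x1)
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# Months 2 and 8, with the dollar values read off the graph:
print(round(avg_rate((2, 111837), (8, 117209)), 2))   # 895.33 dollars/month
```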

Concave Up and Concave Down for Logistic Functions

You may have seen the phrases "concave up" and "concave down" in a previous unit. If you remember those concepts, you will get some practice applying them to logistic functions. If you do not remember those concepts, you will be guided through applying them to logistic functions here. Get started with this scenario: The function M(t) = 100/(1 + 510e^(−0.2t)) models the percentage of memory occupied by the virus, where t is the number of seconds since the attack started. You can see the function's graph in the following applet. Notice a few points on the function. From point A to point B, the function is increasing, implying that the virus is occupying more and more of the memory. Not only that, but the rate of change from A to B is also increasing, implying that the virus is picking up speed as it devours the computer's memory. As you drag point A along the function toward B, observe the increasing rate of change. After point B, the function is still increasing, implying that the virus is still eating more and more memory. However, the rate of change is slowing down, probably because there is less and less memory available to attack. As you drag point C from the middle part of the function to the right, observe the decreasing rate of change. Review the definitions of concave up and concave down. As an easy way to memorize them, the graph of y = x^2 is concave up, and the graph of y = −x^2 is concave down. On the function M(t), from x = 0 to x = 31.17 (which is B's x-value), the function values are increasing faster and faster. On this section, the function is concave up. From x = 31.17 to x = 60, the function values are increasing slower and slower. On this section, the function is concave down. Since logistic functions are always S-shaped, in about half of the logistic functions you will see, the first part of the graph will be concave up and the second part of the graph will be concave down. The other half of the logistic functions you will see behave just the opposite, with the first part of the graph concave down and the second part of the graph concave up. This gives you two major ways to look at the concavity of logistic functions, as described in the following table. [A graph shows a curve that rises along the x axis in the second quadrant to (20, 0), then rises through the first quadrant, before turning horizontal at (50, 5000).] ©2018 WGU, Powered by GeoGebra Type of Logistic Function: Increasing. First Half of Graph: Concave Up (in solid black). Second Half of Graph: Concave Down (in red dashes). [A graph shows a curve that runs horizontally in the second quadrant through (0, 5000) to about (10, 5000), then falls through the first quadrant, and approaches the x axis at (40, 0).] ©2018 WGU, Powered by GeoGebra Type of Logistic Function: Decreasing. First Half of Graph: Concave Down (in solid black). Second Half of Graph: Concave Up (in red dashes). The point where the black and dashed red portions of the graph meet is called the function's inflection point. In the applet from before, this would be point B. An inflection point is where a graph's concavity changes. The inflection point is where the instantaneous rate of change is the largest, or most positive, for an increasing logistic function, or most negative for a decreasing logistic function.
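
Because the inflection point is where the instantaneous rate of change peaks, you can locate it numerically by scanning t-values and keeping the one with the largest rate. Here is a sketch using the lesson's M(t); the brute-force scan is an assumption of convenience, not the applet's method:

```python
import math

def M(t):
    # Virus memory model: M(t) = 100 / (1 + 510e^(-0.2t))
    return 100 / (1 + 510 * math.exp(-0.2 * t))

def rate_at(f, t, h=1e-4):
    # Central difference quotient approximating the instantaneous rate.
    return (f(t + h) - f(t - h)) / (2 * h)

# Scan t from 0 to 60 in steps of 0.01 and keep the steepest point.
best_t = max((i / 100 for i in range(6001)), key=lambda t: rate_at(M, t))
print(round(best_t, 2))   # about 31.17 seconds: point B, the inflection point
```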

Identifying a Graph

You may need to identify the graph of a linear function or situation, or you might already have a graph and want to know which equation corresponds to it. In this section, you will learn how to do both. Carly works for Best Computers. She has a base salary of $1,500 and makes $30 in commission for each computer she sells. Use the function P(c) to model Carly's monthly pay in dollars, where c is the number of computers she sells. Is this graph of P(c) correct? To find out, you first need to find the equation of P(c). The base pay is $1,500, which is the y-intercept. The commission rate is $30 per computer, which is the line's slope. This tells you the linear function will be P(c) = 30c + 1500. To make sure you have the correct equation, you can locate and calculate some points on the function's graph. The easiest point is always the y-intercept, which is (0, 1500). In a linear function, you also need a second point. For the second point, it is always best to pick an easy coordinate. Pick a point with integer coordinate values, like B (10, 1800). Does this point satisfy the linear function P(c) = 30c + 1500? The point B (10, 1800) implies that when the input is 10, the output is 1,800. You can substitute c = 10 into P(c) and check whether the output is 1,800: P(10) = 30(10) + 1500 = 300 + 1500 = 1800. Is the point (50, 4500) on P(c) = 30c + 1500? No, because P(50) = 30(50) + 1500 = 1500 + 1500 = 3000, not 4,500. Lesson Summary In this lesson, you learned the meaning of slope and y-intercept in a linear function. In a graph, you can easily identify the y-intercept, but you must use the formula slope = rise/run to calculate the slope. When you know two points, you can use the slope formula slope = (y1 − y2)/(x1 − x2). Here are the skills you learned in this lesson: Given a line's graph, you can use a slope triangle to calculate the line's slope. Using any two coordinates on the graph of a line, you can use the slope formula to calculate the line's slope. For any line, any two slope triangles will give you the same slope. For any line, any two points on the line will give you the same slope. To know if you have identified the correct graph for a linear function, identify two points on the line (the y-intercept is one of the easiest points to work with), and then verify whether the two points satisfy the linear function by substituting in the input value to see if you get the predicted output value.
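
Verifying candidate points against P(c) = 30c + 1500 is a one-line check per point. A minimal sketch:

```python
def P(c):
    # Carly's monthly pay: $1,500 base plus $30 commission per computer.
    return 30 * c + 1500

print(P(0))    # 1500 -> the y-intercept (0, 1500) is on the line
print(P(10))   # 1800 -> point B (10, 1800) checks out
print(P(50))   # 3000 -> so (50, 4500) is NOT on the line
```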

Given a real-world scenario, the graph of a logistic function modeling the scenario, and two x-values, interpret why one x-value's rate of change is optimal based on real-world context.

You want your computer to be really fast, right? Everyone does. Several factors play into a computer's speed, and an important one is random access memory, or RAM. RAM is often called the computer's "memory," and it is actually the space where your computer works on data, which is then accessed by the central processing unit (CPU). All of these factors can be measured with rates of change, and an optimal situation can be found by comparing those rates. In this lesson you will compare average and instantaneous rates of change to find optimal solutions over a period of time or at a specific instant. In many cases, you can estimate the optimal solution by looking at a graph, and then you will be able to interpret the meaning of that graphical information.

Based on data from January to December, to find out whether funding for homeless shelters is correlated to the number of homeless people in a downtown area, would you draw a conclusion based on a regression's correlation coefficient, interpolation, or extrapolation? In the following graph, if the correlation coefficient were 0.82, what would your conclusion be? In this example, if the correlation coefficient were 0.62, what would your conclusion be?

You would draw a conclusion based on a regression's correlation coefficient. With a coefficient of 0.82, there is a strong correlation between shelter funding and the number of homeless people; a correlation between 0.7 and 0.9 would justify calling the correlation strong. With a coefficient of 0.62, no conclusion can be made. For correlations less than 0.70, more data or a new approach might be needed.
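If you have the raw monthly figures, a correlation coefficient like the 0.82 above can be computed directly. The following is a minimal sketch with made-up numbers, since the lesson does not provide the actual data.

```python
import numpy as np

# Made-up January-through-December figures (not the lesson's data):
# monthly shelter funding, in thousands of dollars, and the downtown homeless count.
funding = np.array([52, 55, 53, 58, 60, 63, 61, 66, 68, 70, 73, 75])
homeless = np.array([250, 262, 255, 270, 268, 284, 280, 288, 290, 305, 302, 310])

# Pearson correlation coefficient between the two variables.
r = np.corrcoef(funding, homeless)[0, 1]
print(round(r, 2))  # close to 1 for this sample; |r| between 0.7 and 0.9 reads as strong
```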

In the redback spider example, to predict the number of spiders in January 2012, would you draw a conclusion based on a regression's correlation coefficient, interpolation, or extrapolation? In the redback spider example, to predict the number of spiders in January 2014, would you draw a conclusion based on a regression's correlation coefficient, interpolation, or extrapolation? In the redback spider example, how many redback spiders were in the forest on January 15, 2011? In the redback spider example, how many redback spiders will be in the forest in December 2016?

You would draw a conclusion based on the regression's interpolation. You would draw a conclusion based on the regression's extrapolation. For January 15, 2011, the corresponding x-value would be 12.5. Using the function, calculate p(12.5) = 93.75e^(0.09(12.5)) ≈ 289, meaning there were approximately 289 redback spiders in the forest at this time. For December 2016, the corresponding x-value would be 72. However, this is an extreme extrapolation value, so no prediction should be made using the model.
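As a quick check of the arithmetic, here is a minimal Python sketch of the spider model p(x) = 93.75e^(0.09x); the time origin for x is assumed from the worked values above.

```python
import math

def p(x):
    """Regression model for the redback spider count; x is months into the data set."""
    return 93.75 * math.exp(0.09 * x)

# Interpolation: x = 12.5 (January 15, 2011) falls inside the observed data.
print(round(p(12.5)))  # 289

# Extreme extrapolation: x = 72 (December 2016) is far beyond the data.
# The model still returns a number, but it should not be used as a prediction.
print(round(p(72)))
```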

The following graph depicts two functions modeling two servers' available memory, as a percentage, since 0:00 on a certain day. As a server administrator who would like to see more available memory as soon as possible, which function would you prefer?

You would prefer to see f(x) because its instantaneous rate of decrease frees up memory much more quickly than g(x). The dashed graph, representing the "preventive care option," would be best in the long term. In this situation, the insurance company has fewer claims to process, and the rate at which the claims are increasing is smaller compared to the "standard option."

Which game's webserver had more online gamers at 12:00 p.m. yesterday? Use the following graph to answer this question. Which game's webserver had more online gamers at 11:59 p.m. yesterday? Use the following graph to solve this problem.

Zoo Alive's webserver had more online gamers at 12:00 p.m. yesterday. Glory of Lords's webserver had more online gamers at 11:59 p.m. yesterday.

Making Decisions

Jamal has been showing so much growth as an employee that you're thinking of opening another store and making him the manager. Where should you locate this new store? If you turn your graph around, you can get some help in making this decision. It now looks like the following graph: This graph represents the inverse (opposite in position, direction, order, or effect) of the first graph, with the independent variable now positioned on the y-axis and the dependent variable on the x-axis. How does this inform your decision about the location for your new ice cream store? Well, you know from your experience that you need to make at least $3,000 per week in sales in order to remain profitable. To make that amount, the outside temperature needs to be about 75°F. Where in the United States does the temperature stay at or above 75°F for most of the year? If you had access to a location in Butte, Montana; one in Nashville, Tennessee; and one in Mesa, Arizona, which would you pick for your new store, based on this data? Right! Arizona, here comes Jamal. So here is a key point: it is critical to think about what a graph communicates, and not simply rely on having the independent and dependent variables on the x- and y-axes, respectively. The context of the situation matters, as does the question you need to answer.

A tester is testing the number of applications a cell phone can run with a certain amount of memory. If you build a function to model this scenario, for memory size and number of applications, which one is the independent variable, and which one is the dependent variable? Independent Variable: amount of memory. Dependent Variable: number of applications.

A biologist is studying how much fertilizer is needed to make a certain type of plant grow to a certain height. If you build a function to model this scenario, for plant height and amount of fertilizer, which one is the independent variable, and which one is the dependent variable? Independent Variable: amount of fertilizer. Dependent Variable: plant height.

Which line has the fastest rate of increase? Which line decreases the fastest?

f(x) = 3x − 100. The slopes of the functions f, g, and h are 3, 1, and 0.5, respectively, so f(x) has the fastest rate of increase. f(x) = −3x − 100. Correct! f(x) decreases the fastest, because its slope of −3 is the most negative.

Examine the following graph of the number of users, in thousands, for a photo app that launched in 2010. Based on this data, which of these years would be an extrapolation value? Which year is an extrapolation?

There is data only from 2011 to 2016, or from t = 1 to t = 6. This means that 2018, or t = 8, is outside the range of known data, making it an extrapolation value. A&B had no data before year 5, so year 3 would be an extrapolation.

1. In the Sunrise Sky Spa scenario, to find when the company had 15,000 memberships, what equation do you need to solve? 2. In the Sunrise Sky Spa scenario, the equation M(t)=15 has two solutions, t≈3.9 and t≈9.15. The following graph depicts M(t). What do these solutions mean in this context? 3. What is the solution to the equation M(t)=21 and how would you interpret these solutions?

1. M(t) = 15. The equation M(t) = 15 asks when the company had 15,000 memberships. 2. In late 2003 and again in early 2009, Sunrise Sky Spa had 15,000 memberships. 3. The solutions would be t ≈ 6.2 and t ≈ 7.65, meaning that around February 2006 and again near the end of July 2007, Sunrise Sky Spa had about 21,000 members.
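The lesson does not restate the formula for M(t), but solutions like t ≈ 3.9 and t ≈ 9.15 can be found numerically from any model by scanning for sign changes of M(t) − 15 and bisecting each one. Here is a minimal Python sketch, using a hypothetical stand-in for M(t) built only so that it crosses 15 at exactly those t-values.

```python
def solve_crossings(f, level, t_lo, t_hi, steps=2000, tol=1e-6):
    """Scan [t_lo, t_hi] for sign changes of f(t) - level, then bisect each one."""
    g = lambda t: f(t) - level
    ts = [t_lo + i * (t_hi - t_lo) / steps for i in range(steps + 1)]
    roots = []
    for a, b in zip(ts, ts[1:]):
        if g(a) * g(b) <= 0 and g(a) != g(b):  # sign change in [a, b]
            lo, hi = a, b
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(round((lo + hi) / 2, 3))
    return roots

# Hypothetical stand-in for M(t): crosses 15 at t = 3.9 and t = 9.15 by
# construction, and peaks near 25 so that M(t) = 21 also has two solutions.
M = lambda t: 15 + 10 * (t - 3.9) * (9.15 - t) / 6.89

print(solve_crossings(M, 15, 0, 12))  # [3.9, 9.15]
```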

Diminishing Returns

Proportionally smaller profits or benefits derived from something as more money or energy is invested in it.

You may be wondering why the variable moves around in the two functions above; sometimes the variable is at the beginning of the right side of the equation, and sometimes it comes later. Just remember that quantities can move around with respect to addition because of the commutative property. For example, 4 + 5 is the same thing as 5 + 4. In terms of these variables, 40,000 + 0.10x is the same thing as 0.10x + 40,000. Don't let subtraction confuse you, though; remember that 4 − 5 is not the same thing as 5 − 4. With subtraction, you have to keep the negative attached to its term. For example, 4 − 5 is the same thing as −5 + 4; the two are equivalent because the negative stayed attached to the 5.
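A quick numeric check of these claims, using the 0.10x + 40,000 pay model from the passage above:

```python
# Addition commutes, so the two ways of writing the pay model agree for every input:
pay_a = lambda x: 40000 + 0.10 * x
pay_b = lambda x: 0.10 * x + 40000
print(all(pay_a(x) == pay_b(x) for x in range(0, 100001, 5000)))  # True

# Subtraction does not commute, but a term may travel with its sign:
print(4 - 5 == 5 - 4)   # False
print(4 - 5 == -5 + 4)  # True: the negative stayed attached to the 5
```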

In the following graph, identify this function's two asymptotes.

y = 60 and y = 10. For the car, you only know that eventually it will be worth only its value in parts; since that value will likely continue into the future, the asymptote is y = 500.

