6. Latency and Throughput

What is the caveat with paying for more computing power to increase the throughput of our system?

Caveat: Although it is the "easier" option, paying for better computing power does not solve the inherent problem. Even with more computing power, there is still an upper bound on how many operations our system/server can handle. e.g. if we upgrade a 1 Gbps network to a 2 Gbps network, we can still only move at most 2 gigabits of data per second. The bottleneck has been raised, not removed.
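A quick back-of-the-envelope sketch of the point above, using hypothetical numbers: doubling the link speed halves the transfer time, but a ceiling still exists.

```python
# Hypothetical workload: time to move a 10 GB file over 1 Gbps vs 2 Gbps links.
file_bits = 10 * 8 * 10**9  # 10 gigabytes expressed in bits

for link_gbps in (1, 2):
    seconds = file_bits / (link_gbps * 10**9)
    print(f"{link_gbps} Gbps link: {seconds:.0f} s")
# → 1 Gbps link: 80 s
# → 2 Gbps link: 40 s
```

Either way, the link imposes a hard limit on throughput; paying more only moves the limit.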

What is a "better" solution for improving the throughput of our system?

Deploying multiple servers to handle requests. Instead of having a single server handle all requests from a client, we can have a network of servers where requests are distributed among the servers.
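The distribution idea above can be sketched with a minimal round-robin rotation over a hypothetical server pool (real systems put a load balancer in front of the pool):

```python
from itertools import cycle

# Hypothetical three-server pool; names are illustrative only.
servers = ["server-a", "server-b", "server-c"]
rotation = cycle(servers)

def route() -> str:
    """Pick the next server in round-robin order."""
    return next(rotation)

# Six incoming requests are spread evenly: two per server.
assignments = [route() for _ in range(6)]
print(assignments)
# → ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```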

Latency of a network request

How long it takes for a request to travel from the client to the server, and then back from the server to the client.

Latency of reading data from memory/disk

How long it takes for the server/machine to read data from memory/disk

Are latency and throughput always correlated?

Latency and throughput are not always correlated. Example: suppose our database operations (reads/writes) have low latency, but another part of the system has low throughput and can only admit X database requests per second. Our low latency (fast) is cancelled out by our low throughput (slow).
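The example above can be put in numbers (hypothetical values): even if each read takes only 1 ms, a component that admits just 10 requests per second caps the whole system.

```python
# Hypothetical numbers illustrating that low latency does not imply high throughput.
db_latency_s = 0.001         # each read completes in 1 ms (fast)
admit_rate_per_s = 10        # but an upstream component admits only 10 requests/s

ideal_serial_rate = 1 / db_latency_s   # 1000 reads/s if latency were the only limit
effective_rate = min(ideal_serial_rate, admit_rate_per_s)
print(effective_rate)  # → 10: the low-throughput component sets the ceiling
```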

What is the easiest way to improve the throughput of our system?

Pay for better computing power (e.g. machines with more compute) or pay a cloud provider for higher throughput on the server.

Throughput

The number of operations that a system can handle properly per unit of time. Example: the throughput of a server is often measured in requests per second (RPS) or queries per second (QPS).
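A minimal sketch of measuring throughput as operations per second; the no-op workload is a stand-in for a real request handler.

```python
import time

def measure_throughput(op, duration_s: float = 1.0) -> float:
    """Count how many times `op` completes within `duration_s` seconds."""
    done = 0
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        op()
        done += 1
    return done / duration_s  # operations per second

# Hypothetical workload: a no-op; substitute a real handler to get a meaningful RPS.
rps = measure_throughput(lambda: None, duration_s=0.1)
print(f"throughput ≈ {rps:.0f} ops/sec")
```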

Latency

Time it takes for a certain operation to complete in a system, typically used in the context of data going from one system to another. Note: Latency can refer to many different operations, such as latency of a network request, latency of a read/write operation, etc.

Do we want high or low latency?

We want low latency, or we aim to reduce latency in our systems. Example: in a video game, latency is typically defined as the time it takes for data to travel from your computer to the game servers and back. If latency is high, the game will feel laggy.
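A simple way to see latency as "time for one operation to complete" is to time a single call; here a `time.sleep` stands in for a hypothetical network round trip.

```python
import time

def measure_latency_ms(op) -> float:
    """Wall-clock time for one call to `op`, in milliseconds."""
    start = time.perf_counter()
    op()
    return (time.perf_counter() - start) * 1000

# Hypothetical stand-in for a 10 ms network round trip.
latency = measure_latency_ms(lambda: time.sleep(0.01))
print(f"latency ≈ {latency:.1f} ms")
```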
