AZ-303 - Module 5 - Implement Load Balancing and Network Security

Web Application Firewall Overview

Web Application Firewall (WAF) provides centralized protection of your web applications from common exploits and vulnerabilities. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities; SQL injection and cross-site scripting are among the most common. Preventing such attacks in application code is challenging and can require rigorous maintenance, patching, and monitoring at multiple layers of the application topology. A centralized web application firewall makes security management much simpler and gives application administrators better assurance of protection against threats and intrusions. A WAF solution can react to a security threat faster by centrally patching a known vulnerability, instead of securing each individual web application.

Supported services: WAF can be deployed with Azure Application Gateway, Azure Front Door, and the Azure Content Delivery Network (CDN) service from Microsoft. WAF on Azure CDN is currently in public preview.

Azure Load Balancer

With Azure Load Balancer, you can spread user requests across multiple virtual machines or other services. That way, you can scale the app to larger sizes than a single virtual machine can support, and you ensure that users get service, even when a virtual machine fails.

Do Load Balancers support Inbound and Outbound scenarios?

Yes, Load Balancer supports inbound and outbound scenarios, provides low latency and high throughput, and scales up to millions of flows for all TCP and UDP applications.

Network security group assignment and evaluation

Network security groups are assigned to a network interface or a subnet. When you assign a network security group to a subnet, the rules apply to all network interfaces in that subnet. You can restrict traffic further by associating a network security group with the network interface of a virtual machine.

When you apply network security groups to both a subnet and a network interface, each network security group is evaluated independently. Inbound traffic is first evaluated by the network security group applied to the subnet, and then by the network security group applied to the network interface. Conversely, outbound traffic from a virtual machine is first evaluated by the network security group applied to the network interface, and then by the network security group applied to the subnet.

Applying a network security group to a subnet instead of individual network interfaces can reduce administration and management efforts. This approach also ensures that all virtual machines within the specified subnet are secured with the same set of rules. Each subnet and network interface can have one network security group applied to it. Network security groups support TCP, UDP, and ICMP, and operate at Layer 4 of the OSI model.
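
The evaluation order above can be sketched as a toy model (illustrative Python only, not an Azure API; the rule shapes, ports, and default-deny behavior are simplified assumptions):

```python
# Illustrative sketch: inbound traffic must be allowed by BOTH the subnet NSG
# and the NIC NSG, evaluated in that order. Each NSG is evaluated independently.

def evaluate_nsg(rules, port):
    """Return 'Allow' or 'Deny' for a destination port; lowest priority number wins."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if port in rule["ports"]:
            return rule["action"]  # processing stops at the first match
    return "Deny"  # stand-in for Azure's default inbound deny rule

def inbound_allowed(subnet_nsg, nic_nsg, port):
    # Inbound: the subnet NSG is evaluated first, then the NIC NSG.
    return (evaluate_nsg(subnet_nsg, port) == "Allow"
            and evaluate_nsg(nic_nsg, port) == "Allow")

subnet_rules = [{"priority": 100, "ports": {80, 443}, "action": "Allow"}]
nic_rules    = [{"priority": 100, "ports": {80},      "action": "Allow"}]

print(inbound_allowed(subnet_rules, nic_rules, 80))   # both NSGs allow -> True
print(inbound_allowed(subnet_rules, nic_rules, 443))  # NIC NSG denies -> False
```

Note that traffic blocked at either layer never reaches the VM, which is why both groups must be checked.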

Network Security Groups (NSGs)

Network security groups filter network traffic to and from Azure resources. Network security groups contain security rules that you configure to allow or deny inbound and outbound traffic. You can use network security groups to filter traffic between virtual machines or subnets, both within a virtual network and from the internet.

How an application gateway accepts a request

1. Before a client sends a request to an application gateway, it resolves the domain name of the application gateway by using a Domain Name System (DNS) server. Azure controls the DNS entry because all application gateways are in the azure.com domain.
2. Azure DNS returns the IP address to the client, which is the frontend IP address of the application gateway.
3. The application gateway accepts incoming traffic on one or more listeners. A listener is a logical entity that checks for connection requests. It's configured with a frontend IP address, protocol, and port number for connections from clients to the application gateway.
4. If a web application firewall (WAF) is in use, the application gateway checks the request headers and the body, if present, against WAF rules. This action determines whether the request is a valid request or a security threat. If the request is valid, it's routed to the backend. If the request isn't valid and WAF is in Prevention mode, it's blocked as a security threat. If WAF is in Detection mode, the request is evaluated and logged, but still forwarded to the backend server.
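
The WAF step (step 4) can be illustrated with a toy sketch; the threat check below is only a stand-in for the real WAF rule sets, and all names and values are hypothetical:

```python
# Hypothetical sketch of WAF mode behavior: in Prevention mode an invalid
# request is blocked; in Detection mode it is logged but still forwarded.

def waf_handle(request, mode, log):
    # Toy rule standing in for the real OWASP-based rule sets:
    # flag an obvious SQL-injection pattern in the body.
    is_threat = "' OR 1=1" in request.get("body", "")
    if is_threat:
        log.append("threat detected: " + request["path"])
        if mode == "Prevention":
            return "blocked"          # the request never reaches the backend
    return "forwarded to backend"     # valid request, or Detection mode

log = []
bad = {"path": "/login", "body": "name=' OR 1=1 --"}
print(waf_handle(bad, "Prevention", log))  # blocked
print(waf_handle(bad, "Detection", log))   # forwarded, but logged
```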

Are Load Balancers Physical Instances?

No, Load balancers aren't physical instances. Load balancer objects are used to express how Azure configures its infrastructure to meet your requirements.

Traffic Manager Routing Methods

Priority routing. When a Traffic Manager profile is configured for priority routing, it contains a prioritized list of service endpoints. Traffic Manager sends all traffic to the primary (highest-priority) endpoint first. If the primary endpoint is not available, Traffic Manager routes the traffic to the second endpoint, and so on. Availability of the endpoint is based on the configured status (enabled or disabled) and the ongoing endpoint monitoring. The Priority traffic-routing method allows you to easily implement a failover pattern. You configure the endpoint priority explicitly or use the default priority based on the endpoint order.

Performance routing. The Performance routing method is designed to improve responsiveness by routing traffic to the location that is closest to the user. The closest endpoint is not necessarily measured by geographic distance; instead, Traffic Manager determines closeness by measuring network latency. Traffic Manager maintains an Internet Latency Table to track the round-trip time between IP address ranges and each Azure datacenter. With this method, Traffic Manager looks up the source IP address of the incoming DNS request in the Internet Latency Table, chooses an available endpoint in the Azure datacenter that has the lowest latency for that IP address range, and returns that endpoint in the DNS response.

Geographic routing. When a Traffic Manager profile is configured for Geographic routing, each endpoint associated with that profile needs to have a set of geographic locations assigned to it. Any requests from those regions get routed only to that endpoint. Some planning is required when you create a geographic endpoint: a location cannot be in more than one endpoint.

Weighted routing. The Weighted traffic-routing method allows you to distribute traffic evenly or to use a pre-defined weighting. You assign a weight to each endpoint in the Traffic Manager profile configuration. The weight is an integer from 1 to 1000. This parameter is optional; if omitted, Traffic Manager uses a default weight of 1. The higher the weight, the higher the priority.

✔️ Additionally, MultiValue routing distributes traffic only to IPv4 and IPv6 endpoints, and Subnet routing distributes traffic based on source IP ranges.
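
As a rough illustration of the Weighted method, the sketch below picks endpoints in proportion to their weights (the endpoint names and weights are assumed for the example; this is not Traffic Manager's actual implementation):

```python
# Illustrative weighted selection mirroring the Weighted routing method:
# each endpoint gets a weight from 1 to 1000 (default 1 when omitted) and is
# chosen in proportion to its weight.
import random

def pick_endpoint(endpoints, rng):
    # Missing weights default to 1, as described above.
    weights = [e.get("weight", 1) for e in endpoints]
    return rng.choices([e["name"] for e in endpoints], weights=weights, k=1)[0]

endpoints = [
    {"name": "west-europe", "weight": 900},  # hypothetical endpoint names
    {"name": "east-us", "weight": 100},
]
counts = {"west-europe": 0, "east-us": 0}
rng = random.Random(42)  # fixed seed so the sketch is reproducible
for _ in range(1000):
    counts[pick_endpoint(endpoints, rng)] += 1
print(counts)  # roughly a 9:1 split between the two endpoints
```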

Application Gateway Routing - Additional Features

Redirection. Traffic can be redirected to another site, or from HTTP to HTTPS.

Rewrite HTTP headers. HTTP headers allow the client and server to pass additional information with the request or the response.

Custom error pages. Application Gateway allows you to create custom error pages instead of displaying the default error pages, using your own branding and layout.

Load Balancer and Remote Desktop Gateway

Remote Desktop Gateway is a Windows service that you can use to enable clients on the internet to make Remote Desktop Protocol (RDP) connections through firewalls to Remote Desktop servers on your private network. The default five-tuple hash in Load Balancer is incompatible with this service. If you want to use Load Balancer with your Remote Desktop servers, use source IP affinity.

Security Rules

A network security group contains one or more security rules. Configure security rules to either allow or deny traffic. Rules have several properties:

Name - A unique name within the network security group.
Priority - A number between 100 and 4096.
Source or destination - Any, or an individual IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), service tag, or application security group.
Protocol - TCP, UDP, or Any.
Direction - Whether the rule applies to inbound or outbound traffic.
Port range - An individual port or range of ports.
Action - Allow or deny the traffic.

Network security group security rules are evaluated by priority, using the 5-tuple information (source, source port, destination, destination port, and protocol) to allow or deny the traffic. When the conditions for a rule match the device configuration, rule processing stops. With network security groups, the connections are stateful: return traffic is automatically allowed for the same TCP/UDP session.
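
First-match-wins processing over the rule properties above can be sketched like this (a deliberately simplified model: CIDR and service-tag matching is reduced to string equality, and all values are illustrative):

```python
# Minimal sketch of first-match rule processing on a simplified tuple
# (source, destination, destination port, protocol). Rules are checked from
# lowest priority number to highest; the first matching rule wins.

def match(rule, flow):
    # "Any" in a rule field matches every flow value (a simplification of
    # Azure's real CIDR / service-tag / port-range matching).
    return all(rule[k] in ("Any", flow[k])
               for k in ("source", "destination", "dest_port", "protocol"))

def decide(rules, flow):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if match(rule, flow):
            return rule["action"]   # processing stops on the first match
    return "Deny"                   # stand-in for the default deny

rules = [
    {"priority": 100, "source": "Any", "destination": "10.0.0.0/24",
     "dest_port": 443, "protocol": "TCP", "action": "Allow"},
    {"priority": 200, "source": "Any", "destination": "Any",
     "dest_port": "Any", "protocol": "Any", "action": "Deny"},
]
flow = {"source": "Internet", "destination": "10.0.0.0/24",
        "dest_port": 443, "protocol": "TCP"}
print(decide(rules, flow))  # Allow: priority 100 matches before the Deny rule
```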

Configure a Public Load Balancer

A public load balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of a virtual machine in the back-end pool. The responses are then returned to the client. By applying load-balancing rules, you distribute specific types of traffic across multiple virtual machines or services.

Azure Front Door - Features

Accelerate application performance. Using the split TCP-based anycast protocol, Front Door ensures that your end users promptly connect to the nearest Front Door POP (point of presence).

Increase application availability with smart health probes. Front Door delivers high availability for your critical applications using its smart health probes, monitoring your backends for both latency and availability and providing instant automatic failover when a backend goes down.

URL-based routing. URL path-based routing allows you to route traffic to backend pools based on the URL paths of the request. One scenario is to route requests for different content types to different backend pools.

Multiple-site hosting. Multiple-site hosting enables you to configure more than one web site on the same Front Door configuration.

Session affinity. The cookie-based session affinity feature is useful when you want to keep a user session on the same application backend. By using Front Door managed cookies, subsequent traffic from a user session gets directed to the same application backend for processing.

TLS termination. Front Door supports TLS termination at the edge; that is, individual users can set up a TLS connection with Front Door environments instead of establishing it over long-haul connections with the application backend.

Custom domains and certificate management. When you use Front Door to deliver content, a custom domain is necessary if you would like your own domain name to be visible in your Front Door URL.

URL redirection. Web applications are expected to automatically redirect any HTTP traffic to HTTPS. This ensures that all communication between the users and the application occurs over an encrypted path.

URL rewrite. Front Door supports URL rewrite by allowing you to configure an optional Custom Forwarding Path to use when constructing the request to forward to the backend.

Protocol support - IPv6 and HTTP/2 traffic. Azure Front Door natively supports end-to-end IPv6 connectivity and the HTTP/2 protocol.

Internal and external load balancers

An external load balancer operates by distributing client traffic across multiple virtual machines. An external load balancer permits traffic from the internet. The traffic might come from browsers, mobile apps, or other sources. An internal load balancer distributes a load from internal Azure resources to other Azure resources. For example, if you have front-end web servers that need to call business logic that's hosted on multiple middle-tier servers, you can distribute that load evenly by using an internal load balancer. No traffic is allowed from internet sources.

Application Gateway components

Application Gateway has several components. The main parts for encryption are the frontend port, the listener, and the backend pool. The following image shows how incoming traffic from a client to Application Gateway over SSL is decrypted and then re-encrypted when it's sent to a server in the backend pool.

Application Gateway

Application Gateway manages the requests that client applications can send to a web app. Application Gateway routes traffic to a pool of web servers based on the URL of a request. This is known as application layer routing. The pool of web servers can be Azure virtual machines, Azure virtual machine scale sets, Azure App Service, and even on-premises servers. Azure Application Gateway can be used as an internal application load balancer or as an internet-facing application load balancer. An internet-facing application gateway uses public IP addresses. The DNS name of an internet-facing application gateway is publicly resolvable to its public IP address. As a result, internet-facing application gateways can route client requests from the internet.

Azure Bastion - Architecture

Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. Once you provision an Azure Bastion service in your virtual network, the RDP/SSH experience is available to all your VMs in the same virtual network. Exposing RDP/SSH ports over the internet isn't desired and is seen as a significant threat surface. This is often due to protocol vulnerabilities. To contain this threat surface, you can deploy bastion hosts (also known as jump servers) at the public side of your perimeter network. Bastion host servers are designed and configured to withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the bastion, as well as further inside the network. The figure below shows the architecture of an Azure Bastion deployment. In this diagram:

1. The Bastion host is deployed in the virtual network.
2. The user connects to the Azure portal using any HTML5 browser.
3. The user selects the virtual machine to connect to.
4. With a single click, the RDP/SSH session opens in the browser.
5. No public IP is required on the Azure VM.

Azure Firewall

Azure Firewall is a managed, cloud-based network security service that protects your Azure Virtual Network resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. You can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks. Azure Firewall uses a static public IP address for your virtual network resources allowing outside firewalls to identify traffic originating from your virtual network. The service is fully integrated with Azure Monitor for logging and analytics.

Azure Front Door

Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performing, personalized modern applications, APIs, and content that reach a global audience through Azure. Azure Front Door enables you to define, manage, and monitor the global routing for your web traffic, optimizing for best performance and instant global failover for high availability. Front Door works at Layer 7, the HTTP/HTTPS layer, and uses the anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your routing method selection in the configuration, you can ensure that Front Door routes your client requests to the fastest and most available application backend. An application backend is any internet-facing service hosted inside or outside of Azure.

Azure Firewall features

Built-in high availability. High availability is built in, so no additional load balancers are required and there's nothing you need to configure.

Availability Zones. Azure Firewall can be configured during deployment to span multiple Availability Zones for increased availability.

Unrestricted cloud scalability. Azure Firewall can scale up as much as you need to accommodate changing network traffic flows, so you don't need to budget for your peak traffic.

Application FQDN filtering rules. You can limit outbound HTTP/S traffic or Azure SQL traffic to a specified list of fully qualified domain names (FQDNs), including wildcards.

Network traffic filtering rules. You can centrally create allow or deny network filtering rules by source and destination IP address, port, and protocol. Azure Firewall is fully stateful, so it can distinguish legitimate packets for different types of connections. Rules are enforced and logged across multiple subscriptions and virtual networks.

Threat intelligence. Threat intelligence-based filtering can be enabled for your firewall to alert on and deny traffic from/to known malicious IP addresses and domains. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed.

Multiple public IP addresses. You can associate multiple public IP addresses (up to 100) with your firewall.

Application Gateway Routing

Clients send requests to your web apps using the IP address or DNS name of the gateway. The gateway routes requests to a selected web server in the back-end pool, using a set of rules configured for the gateway to determine where the request should go. There are two primary methods of routing traffic: path-based routing and multiple-site hosting.

Path-based routing. Path-based routing enables you to send requests with different paths in the URL to different pools of back-end servers.

Multiple-site routing. Multiple-site hosting enables you to configure more than one web application on the same application gateway instance. In a multi-site configuration, you register multiple DNS names (CNAMEs) for the IP address of the Application Gateway, specifying the name of each site. Application Gateway uses separate listeners to wait for requests for each site. Each listener passes the request to a different rule, which can route the requests to servers in a different back-end pool. Multi-site configurations are useful for supporting multi-tenant applications, where each tenant has its own set of virtual machines or other resources hosting a web application.
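
Path-based routing can be sketched as a longest-prefix lookup (a simplified illustration; the pool names and paths are made up, and Application Gateway's real path matching is richer):

```python
# Sketch of path-based routing: requests are mapped to a back-end pool by URL
# path prefix; unmatched paths fall through to a default pool.

def route(path, path_map, default_pool):
    # The longest matching prefix wins, so "/images/icons" would beat "/images".
    best = None
    for prefix, pool in path_map.items():
        if path.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, pool)
    return best[1] if best else default_pool

path_map = {"/images": "image-servers", "/video": "video-servers"}
print(route("/images/logo.png", path_map, "web-servers"))  # image-servers
print(route("/index.html", path_map, "web-servers"))       # web-servers (default)
```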

Demonstration - Create a Load Balancer to Load Balance VMs

Create a Load Balancer

In this demonstration, you create a Load Balancer that helps load balance virtual machines. You can create a public Load Balancer or an internal Load Balancer. When you create a public Load Balancer, you must also create a new Public IP address that is configured as the frontend (named LoadBalancerFrontend by default) for the Load Balancer.

1. Select + Create a resource, type load balancer.
2. Click Create.
3. In the Basics tab of the Create load balancer page, enter or select the following information, accept the defaults for the remaining settings, and then select Review + create.

✔️ Important: This demonstration assumes that the Standard SKU is chosen during the SKU selection process above.

Create Load Balancer resources

In this section, you configure Load Balancer settings for a backend address pool and a health probe, and specify a load balancer rule.

Create a Backend pool

To distribute traffic to the VMs, a backend address pool contains the IP addresses of the virtual machines' network interfaces (NICs) connected to the Load Balancer. Create the backend address pool myBackendPool to include virtual machines for load-balancing internet traffic.

1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.
2. Under Settings, select Backend pools, then select Add.
3. On the Add a backend pool page, for name, type myBackendPool as the name for your backend pool, and then select Add.

Create a health probe

To allow the Load Balancer to monitor the status of your app, you use a health probe. The health probe dynamically adds or removes VMs from the Load Balancer rotation based on their response to health checks. Create a health probe myHealthProbe to monitor the health of the VMs.

1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.
2. Under Settings, select Health probes, then select Add.
   Name: Enter myHealthProbe.
   Protocol: Select HTTP.
   Port: Enter 80.
   Interval: Enter 15 for the number of seconds between probe attempts.
   Unhealthy threshold: Select 2 for the number of consecutive probe failures that must occur before a VM is considered unhealthy.
3. Select OK.

Create a Load Balancer rule

A Load Balancer rule is used to define how traffic is distributed to the VMs. You define the frontend IP configuration for the incoming traffic and the backend IP pool to receive the traffic, along with the required source and destination port. Create a Load Balancer rule myLoadBalancerRuleWeb for listening to port 80 in the frontend FrontendLoadBalancer and sending load-balanced network traffic to the backend address pool myBackendPool, also using port 80.

1. Select All services in the left-hand menu, select All resources, and then select myLoadBalancer from the resources list.
2. Under Settings, select Load balancing rules, then select Add.
3. Use these values to configure the load balancing rule:
   Name: Enter myHTTPRule.
   Protocol: Select TCP.
   Port: Enter 80.
   Backend port: Enter 80.
   Backend pool: Select myBackendPool.
   Health probe: Select myHealthProbe.
4. Leave the rest of the defaults and then select OK.

Create backend servers

In this section, you create a virtual network, create three virtual machines for the backend pool of the Load Balancer, and then install IIS on the virtual machines to help test the Load Balancer.

Virtual network and parameters

In this section you'll need to replace the following parameters in the steps with the information below:

   resource-group-name: myResourceGroupSLB
   virtual-network-name: myVNet
   region-name: West Europe
   IPv4-address-space: 10.1.0.0/16
   subnet-name: myBackendSubnet
   subnet-address-range: 10.1.0.0/24

Create the virtual network

In this section, you'll create a virtual network and subnet.

1.
On the upper-left side of the screen, select Create a resource > Networking > Virtual network, or search for Virtual network in the search box.
2. In Create virtual network, enter or select this information in the Basics tab:
   Project details
   Subscription: Select your Azure subscription.
   Resource group: Select Create new, enter resource-group-name, then select OK, or select an existing resource-group-name based on the parameters.
   Instance details
   Name: Enter virtual-network-name.
   Region: Select region-name.
3. Select the IP Addresses tab, or select the Next: IP Addresses button at the bottom of the page.
4. In the IP Addresses tab, enter this information:
   IPv4 address space: Enter IPv4-address-space.
5. Under Subnet name, select the word default.
6. In Edit subnet, enter this information:
   Subnet name: Enter subnet-name.
   Subnet address range: Enter subnet-address-range.
7. Select Save.
8. Select the Review + create tab or select the Review + create button.
9. Select Create.

Create virtual machines

Public IP SKUs and Load Balancer SKUs must match. For a Standard Load Balancer, use VMs with Standard IP addresses in the backend pool. If you selected Basic, use VMs with Basic IP addresses. In this section, you will create three VMs (myVM1, myVM2, and myVM3) with a Standard public IP address in three different zones (Zone 1, Zone 2, and Zone 3) that are later added to the backend pool of the Load Balancer that was created earlier.

1. On the upper-left side of the portal, select Create a resource > Compute > Windows Server 2019 Datacenter.
2. In Create a virtual machine, type or select the following values in the Basics tab:
   Subscription > Resource group: Select myResourceGroupSLB.
   Instance details > Virtual machine name: Type myVM1.
   Instance details > Region: Select West Europe.
   Instance details > Availability options: Select Availability zones.
   Instance details > Availability zone: Select 1.
   Administrator account: Enter the Username, Password, and Confirm password information.
   Select the Networking tab, or select Next: Disks, then Next: Networking.
3. In the Networking tab, make sure the following are selected:
   Virtual network: myVNet
   Subnet: myBackendSubnet
   Public IP: Select Create new, and in the Create public IP address window, for SKU, select Standard, for Availability zone, select Zone-redundant, and then select OK. If you created a Basic Load Balancer, select Basic. Microsoft recommends using the Standard SKU for production workloads.
   To create a new network security group (NSG), a type of firewall, under Network security group, select Advanced.
   1. In the Configure network security group field, select Create new.
   2. Type myNetworkSecurityGroup, and select OK.
   To make the VM a part of the Load Balancer's backend pool, complete the following steps:
   In Load balancing, for Place this virtual machine behind an existing load balancing solution?, select Yes.
   In Load balancing settings, for Load balancing options, select Azure load balancer.
   For Select a load balancer, select myLoadBalancer.
   Select the Management tab, or select Next > Management.
4. In the Management tab, under Monitoring, set Boot diagnostics to Off.
5. Select Review + create.
6. Review the settings, and then select Create.
7. Follow steps 2 to 6 to create two additional VMs with the following values and all the other settings the same as myVM1:
   Name: VM 2 is myVM2; VM 3 is myVM3.
   Availability zone: VM 2 is 2; VM 3 is 3.
   Public IP: Standard SKU for both VM 2 and VM 3.
   Public IP - Availability zone: Zone-redundant for both VM 2 and VM 3.
   Network security group: Select the existing myNetworkSecurityGroup for both VM 2 and VM 3.

Create NSG rule

In this section, you create a network security group rule to allow inbound connections using HTTP.

1. Select All services in the left-hand menu, select All resources, and then from the resources list select myNetworkSecurityGroup, which is located in the myResourceGroupSLB resource group.
2. Under Settings, select Inbound security rules, and then select Add.
3. Enter these values for the inbound security rule named myHTTPRule to allow inbound HTTP connections using port 80:
   Source: Service Tag
   Source service tag: Internet
   Destination port ranges: 80
   Protocol: TCP
   Action: Allow
   Priority: 100
   Name: myHTTPRule
   Description: Allow HTTP
4. Select Add.

Repeat the steps for the inbound RDP rule, if needed, with the following differing values:
   Destination port ranges: Type 3389.
   Priority: Type 200.
   Name: Type myRDPRule.
   Description: Type Allow RDP.

Install IIS

1. Select All services in the left-hand menu, select All resources, and then from the resources list, select myVM1, which is located in the myResourceGroupSLB resource group.
2. On the Overview page, select Connect to RDP into the VM.
3. Log into the VM with the credentials that you provided during the creation of this VM. This launches a remote desktop session with the virtual machine myVM1.
4. On the server desktop, navigate to Windows Administrative Tools > Windows PowerShell.
5. In the PowerShell window, run the following commands to install the IIS server, remove the default iisstart.htm file, and then add a new iisstart.htm file that displays the name of the VM:

   # Install the IIS server role
   Install-WindowsFeature -name Web-Server -IncludeManagementTools
   # Remove the default htm file
   Remove-Item C:\inetpub\wwwroot\iisstart.htm
   # Add a new htm file that displays the server name
   Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)

6. Close the RDP session with myVM1.
7. Repeat steps 1 to 6 to install IIS and the updated iisstart.htm file on myVM2 and myVM3.

Test the Load Balancer

1. Find the public IP address for the Load Balancer on the Overview screen. Select All services in the left-hand menu, select All resources, and then select myPublicIP.
2. Copy the public IP address, and then paste it into the address bar of your browser.
The default page of IIS Web server is displayed on the browser. To see the Load Balancer distribute traffic across all three VMs, you can customize the default page of each VM's IIS Web server and then force-refresh your web browser from the client machine.

Load Balancer Distribution modes

Five-tuple hash. The default distribution mode for Load Balancer is a five-tuple hash. The tuple is composed of the source IP, source port, destination IP, destination port, and protocol type. Because the source port is included in the hash and the source port changes for each session, clients might be directed to a different virtual machine for each session.

Source IP affinity. This distribution mode is also known as session affinity or client IP affinity. To map traffic to the available servers, the mode uses a two-tuple hash (from the source IP address and destination IP address) or a three-tuple hash (from the source IP address, destination IP address, and protocol type). The hash ensures that requests from a specific client are always sent to the same virtual machine behind the load balancer.
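
The difference between the two modes can be sketched with a generic hash (this is not Azure's actual hash function; the addresses and ports are illustrative):

```python
# Illustrative hash-based distribution: a five-tuple hash can send each new
# session to a different VM because the ephemeral source port changes, while a
# two-tuple (source IP affinity) hash keeps a client pinned to the same VM.
import hashlib

def pick_vm(fields, vm_count):
    # Hash the tuple fields and map the result onto the backend pool.
    digest = hashlib.sha256("|".join(map(str, fields)).encode()).hexdigest()
    return int(digest, 16) % vm_count

client, server = "203.0.113.7", "10.0.0.4"

# Five-tuple: a new ephemeral source port per session may land on another VM.
session1 = pick_vm((client, 50001, server, 80, "TCP"), 3)
session2 = pick_vm((client, 50002, server, 80, "TCP"), 3)

# Source IP affinity (two-tuple): same client + destination -> same VM, always.
affinity1 = pick_vm((client, server), 3)
affinity2 = pick_vm((client, server), 3)
print(session1, session2, affinity1, affinity2)
```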

Application Gateway Configuration

Front-end IP address. Client requests are received through a front-end IP address. You can configure Application Gateway to have a public IP address, a private IP address, or both. Application Gateway can't have more than one public and one private IP address.

Listeners. Application Gateway uses one or more listeners to receive incoming requests. A listener accepts traffic arriving on a specified combination of protocol, port, host, and IP address. Each listener routes requests to a back-end pool of servers following routing rules that you specify. A listener can be Basic or Multi-site. A Basic listener only routes a request based on the path in the URL. A Multi-site listener can also route requests using the hostname element of the URL.

Routing rules. A routing rule binds a listener to the back-end pools. A rule specifies how to interpret the hostname and path elements in the URL of a request, and direct the request to the appropriate back-end pool. A routing rule also has an associated set of HTTP settings. These settings indicate whether (and how) traffic is encrypted between Application Gateway and the back-end servers, and other configuration information such as: protocol, session stickiness, connection draining, request timeout period, and health probes.

Back-end pools. A back-end pool references a collection of web servers. You provide the IP address of each web server and the port on which it listens for requests when configuring the pool. Each pool can specify a fixed set of virtual machines, a virtual machine scale set, an app hosted by Azure App Service, or a collection of on-premises servers. Each back-end pool has an associated load balancer that distributes work across the pool.

Web application firewall. The web application firewall (WAF) is an optional component that handles incoming requests before they reach a listener. The web application firewall checks each request for many common threats, based on the Open Web Application Security Project (OWASP). These include: SQL injection, cross-site scripting, command injection, HTTP request smuggling, HTTP response splitting, remote file inclusion, bots, crawlers, and scanners, and HTTP protocol violations and anomalies. WAF is enabled on your Application Gateway by selecting the WAF tier when you create a gateway.

Health probes. Health probes are an important part in helping the load balancer determine which servers are available for load balancing in a back-end pool. Application Gateway uses a health probe to send a request to a server. If the server returns an HTTP response with a status code between 200 and 399, the server is deemed healthy. If you don't configure a health probe, Application Gateway creates a default probe that waits for 30 seconds before deciding that a server is unavailable.
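
The healthy-range check described for health probes can be shown in a few lines (the status codes and IP addresses below are illustrative):

```python
# Sketch of the health check: a back-end server is deemed healthy when its
# probe response has an HTTP status code between 200 and 399.

def is_healthy(status_code):
    return 200 <= status_code <= 399

# Only servers with healthy probe responses stay in the routable pool.
probe_results = {"10.0.1.4": 200, "10.0.1.5": 302, "10.0.1.6": 503}
routable = [ip for ip, code in probe_results.items() if is_healthy(code)]
print(routable)  # ['10.0.1.4', '10.0.1.5']
```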

Implementing Azure Firewall

Let's consider a simple example where we want to use Azure Firewall to protect our workload server by controlling the network traffic.

1. Create the network infrastructure. In this case, we have one virtual network with three subnets.
2. Deploy the firewall. The firewall is associated with the virtual network. In this case, it is in a separate subnet with a public and a private IP address. The private IP address will be used in a new routing table.
3. Create a default route. Create a routing table to direct workload network traffic to the firewall. The route will be associated with the workload subnet, so all traffic from that subnet is routed to the firewall's private IP address.
4. Configure an application rule.

In production deployments, a hub-and-spoke model is recommended, where the firewall is in its own VNET, and workload servers are in peered VNETs in the same region with one or more subnets.
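Step 3 above relies on route selection: Azure picks the most specific (longest-prefix) route that matches the destination, so a 0.0.0.0/0 default route sends everything that isn't handled by a more specific route to the firewall. The sketch below simulates that lookup in Python; the route table and the firewall private IP (10.0.1.4) are made-up examples, not values from the text.

```python
import ipaddress

# Hypothetical route table on the workload subnet: intra-VNet traffic stays
# local, and the default route's next hop is the firewall's private IP.
routes = [
    ("10.0.0.0/16", "VnetLocal"),   # example VNet address space
    ("0.0.0.0/0",   "10.0.1.4"),    # example firewall private IP
]

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific matching route."""
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), hop) for prefix, hop in routes
               if dest in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("10.0.2.5"))       # "VnetLocal": stays inside the VNet
print(next_hop("93.184.216.34"))  # "10.0.1.4": sent to the firewall
```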

Distribute traffic with Azure Load Balancer

Load balancers use a hash-based distribution algorithm. By default, a five-tuple hash is used to map traffic to available servers. The hash is made from the following elements:

1. Source IP: the IP address of the requesting client.
2. Source port: the port of the requesting client.
3. Destination IP: the destination IP address of the request.
4. Destination port: the destination port of the request.
5. Protocol type: the specified protocol type, TCP or UDP.
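The five-tuple mapping can be sketched as below. Azure's actual hash function is internal; any deterministic hash over the same five fields shows the key property, namely that packets with the same five-tuple always land on the same backend.

```python
import hashlib

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol, backends):
    """Map a five-tuple deterministically onto one of the backends."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{protocol}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return backends[digest % len(backends)]

backends = ["vm-0", "vm-1", "vm-2"]  # made-up backend names
a = pick_backend("203.0.113.7", 50123, "10.0.0.4", 80, "TCP", backends)
b = pick_backend("203.0.113.7", 50123, "10.0.0.4", 80, "TCP", backends)
assert a == b  # same five-tuple -> same backend, for the life of the flow
```

Note that if the client reconnects from a different source port, the tuple changes and the request may land on a different server, which is why source IP affinity (discussed later) exists as an alternative distribution mode.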

Azure Traffic Manager

Microsoft Azure Traffic Manager allows you to control the distribution of user traffic to your service endpoints running in different datacenters around the world. Traffic Manager works by using the Domain Name System (DNS) to direct end-user requests to the most appropriate endpoint. Service endpoints supported by Traffic Manager include Azure VMs, Web Apps, and cloud services. You can also use Traffic Manager with external, non-Azure endpoints. Traffic Manager selects an endpoint based on the configured traffic-routing method, and it supports a range of traffic-routing methods to suit different application needs. Once the endpoint is selected, clients connect directly to the appropriate service endpoint. Traffic Manager provides endpoint health checks and automatic endpoint failover, enabling you to build high-availability applications that are resilient to failure, including the failure of an entire Azure region.
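Because Traffic Manager works at the DNS level, "failover" simply means answering DNS queries with a different endpoint's name. A minimal Python sketch of the Priority routing method (one of Traffic Manager's routing methods), with made-up endpoint names:

```python
# Hypothetical endpoint list: lowest priority number is preferred, and
# Traffic Manager's health checks mark each endpoint up or down.
endpoints = [
    {"name": "app-eastus.example.net", "priority": 1, "healthy": False},
    {"name": "app-westeu.example.net", "priority": 2, "healthy": True},
    {"name": "app-seasia.example.net", "priority": 3, "healthy": True},
]

def resolve(endpoints):
    """Return the highest-priority healthy endpoint, or None if all are down."""
    for ep in sorted(endpoints, key=lambda e: e["priority"]):
        if ep["healthy"]:
            return ep["name"]
    return None

print(resolve(endpoints))  # East US is down, so the West Europe name is returned
```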

Azure Bastion

The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. When you connect using Azure Bastion, your virtual machines do not need a public IP address. Bastion provides secure RDP and SSH connectivity to all the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. With Azure Bastion, you connect to the virtual machine directly from the Azure portal.

Backend pool

The backend pool contains your application servers. These servers might be virtual machines, a virtual machine scale set, or applications running on Azure App Service. Incoming requests can be load balanced across the servers in this pool. The backend pool has an HTTP setting that references a certificate used to authenticate the backend servers. The gateway re-encrypts the traffic by using this certificate before sending it to one of your servers in the backend pool. If you're using Azure App Service to host the backend application, you don't need to install any certificates in Application Gateway to connect to the backend pool. All communications are automatically encrypted. Application Gateway trusts the servers because Azure manages them.

Azure Bastion - Key features

The following features are available:

RDP and SSH directly in the Azure portal: You can get to an RDP or SSH session directly in the Azure portal using a single-click, seamless experience.

Remote session over TLS and firewall traversal for RDP/SSH: Azure Bastion uses an HTML5-based web client that is automatically streamed to your local device, so that you get your RDP/SSH session over TLS on port 443, enabling you to traverse corporate firewalls securely.

No public IP required on the Azure VM: Azure Bastion opens the RDP/SSH connection to your Azure virtual machine using the private IP on your VM. You don't need a public IP on your virtual machine.

No hassle of managing NSGs: Azure Bastion is a fully managed PaaS service from Azure that is hardened internally to provide secure RDP/SSH connectivity. You don't need to apply any NSGs to the Azure Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your NSGs to allow RDP/SSH from Azure Bastion only.

Protection against port scanning: Because you don't need to expose your virtual machines to the public Internet, your VMs are protected against port scanning by rogue and malicious users located outside your virtual network.

Protection against zero-day exploits, with hardening in one place only: Azure Bastion is a fully platform-managed PaaS service. Because it sits at the perimeter of your virtual network, you don't need to worry about hardening each of the virtual machines in your virtual network.

Distributing Network Traffic

This table compares Azure Load Balancer, Application Gateway, Traffic Manager, and Azure Front Door. These technologies can be used in isolation or in combination.

Azure Load Balancer
- Technology: Transport layer (Layer 4)
- Protocols: Any TCP or UDP protocol
- Backends and endpoints: Azure VMs and Azure VM scale sets
- Network connectivity: External and internal

Application Gateway
- Technology: Application layer (Layer 7)
- Protocols: HTTP, HTTPS, HTTP/2, and WebSockets
- Backends and endpoints: Azure VMs, Azure VM scale sets, Azure App Services, IP addresses, and hostnames
- Network connectivity: External and internal

Traffic Manager
- Technology: DNS resolver
- Protocols: DNS resolution
- Backends and endpoints: Azure Cloud Services, Azure App Services, Azure App Service slots, and public IP addresses
- Network connectivity: External

Azure Front Door
- Technology: Layer 7 or HTTP/HTTPS
- Protocols: Split TCP-based anycast protocol
- Backends and endpoints: Internet-facing services hosted inside or outside of Azure
- Network connectivity: External and internal

Load Balancer High Availability Options

To achieve high availability with Load Balancer, you can use availability sets and availability zones to ensure that virtual machines are always available:

Availability set
- Service level agreement (SLA): 99.95%
- Protection from hardware failures within datacenters

Availability zone
- Service level agreement (SLA): 99.99%
- Protection from entire datacenter failure

Source IP Affinity Steps

To add session persistence through the Azure portal:

1. In the Azure portal, open the load balancer resource.
2. Edit the relevant load-balancing rule.
3. Change the value for Session persistence to Client IP.

Frontend port and listener

Traffic enters the gateway through a frontend port. You can open many ports, and Application Gateway can receive messages on any of these ports. A listener is the first thing that your traffic meets when entering the gateway through a port. It's set up to listen for a specific host name, and a specific port on a specific IP address. The listener can use an SSL certificate to decrypt the traffic that enters the gateway. The listener then uses a rule that you define to direct the incoming requests to a backend pool.
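The listener-to-pool mapping described above can be sketched as a lookup on (host name, port). This is an illustrative Python simulation; the host names and pool names are made up, and a real Multi-site listener configuration also involves rules and HTTP settings.

```python
# Hypothetical Multi-site listener table: each listener matches a specific
# host name and port, and its rule names a backend pool.
listeners = [
    {"host": "contoso.com",  "port": 443, "pool": "contoso-pool"},
    {"host": "fabrikam.com", "port": 443, "pool": "fabrikam-pool"},
]

def route(host: str, port: int):
    """Return the backend pool for the first listener matching (host, port)."""
    for listener in listeners:
        if listener["host"] == host and listener["port"] == port:
            return listener["pool"]
    return None  # no listener matches the incoming request

assert route("contoso.com", 443) == "contoso-pool"
```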

Select a Load Balancer Solution

Two products are available when you create a load balancer in Azure: basic load balancers and standard load balancers.

Basic load balancers allow:

1. Port forwarding
2. Automatic reconfiguration
3. Health probes
4. Outbound connections through source network address translation (SNAT)
5. Diagnostics through Azure Log Analytics for public-facing load balancers

Basic load balancers can be used only with virtual machines in a single availability set or a virtual machine scale set.

Standard load balancers support all of the basic features. They also allow:

1. HTTPS health probes
2. Availability zones
3. Diagnostics through Azure Monitor, for multidimensional metrics
4. High availability (HA) ports
5. Outbound rules
6. A guaranteed SLA (99.99% for two or more virtual machines)

Standard load balancers can be used with any virtual machines or virtual machine scale sets in a single virtual network.

Application Security Groups

Use application security groups within a network security group to apply a security rule to a group of resources. It's easier to deploy and scale up specific application workloads. You just add a new virtual machine deployment to one or more application security groups, and that virtual machine automatically picks up your security rules for that workload. An application security group allows you to group network interfaces together. You can then use that application security group as a source or destination rule within a network security group. For example, your company has a number of front-end servers in a virtual network. The web servers must be accessible over ports 80 and 8080. Database servers must be accessible over port 1433. You assign the network interfaces for the web servers to one application security group, and the network interfaces for the database servers to another application security group. You then create two inbound rules in your network security group. One rule allows HTTP traffic to all servers in the web server application security group. The other rule allows SQL traffic to all servers in the database server application security group.
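The web-server/database-server example above can be sketched as follows. This is an illustrative Python simulation of rule evaluation, not Azure's implementation; VM names, ASG names, and priorities are made up. Note how a single rule covers multiple ports, which is also the idea behind the augmented security rules described next.

```python
# Hypothetical ASG membership: each NIC (here identified by VM name)
# belongs to one or more application security groups.
asg_membership = {
    "web-vm-1": {"webServers"},
    "web-vm-2": {"webServers"},
    "db-vm-1":  {"dbServers"},
}

# Inbound rules target an ASG rather than individual IP addresses.
rules = [
    {"priority": 100, "dest_asg": "webServers", "ports": {80, 8080}, "action": "Allow"},
    {"priority": 110, "dest_asg": "dbServers",  "ports": {1433},     "action": "Allow"},
]

def evaluate(vm: str, port: int) -> str:
    """Evaluate rules in priority order; first match wins, else implicit deny."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["dest_asg"] in asg_membership[vm] and port in rule["ports"]:
            return rule["action"]
    return "Deny"

print(evaluate("web-vm-1", 8080))  # "Allow": web servers accept 80 and 8080
print(evaluate("db-vm-1", 80))     # "Deny": no rule allows HTTP to database servers
```

A new web VM only needs to be added to asg_membership (in Azure, to the application security group) to pick up the workload's rules automatically; no rule changes are required.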

Augmented security rules

You use augmented security rules for network security groups to simplify the management of large numbers of rules. Augmented security rules also help when you need to implement more complex network sets of rules. Augmented rules let you add the following options into a single security rule: 1. multiple IP addresses 2. multiple ports 3. service tags 4. application security groups

Service tags

You use service tags to simplify network security group security even further. You can allow or deny traffic to a specific Azure service, either globally or per region. Service tags simplify security for virtual machines and Azure virtual networks by allowing you to restrict access by resource or service. Service tags represent a group of IP addresses and help simplify the configuration of your security rules. For resources that you can specify by using a tag, you don't need to know the IP address or port details. You can restrict access to many services. Microsoft manages the service tags (you can't create your own). Some examples of the tags are:

VirtualNetwork: Represents all virtual network addresses anywhere in Azure, and in your on-premises network if you're using hybrid connectivity.

AzureLoadBalancer: Denotes Azure's infrastructure load balancer. The tag translates to the virtual IP address of the host (168.63.129.16) where Azure health probes originate.

Internet: Represents anything outside the virtual network address space that is publicly reachable, including resources that have public IP addresses. One such resource is the Web Apps feature of Azure App Service.

AzureTrafficManager: Represents the IP address for Azure Traffic Manager.

Storage: Represents the IP address space for Azure Storage. You can specify whether traffic is allowed or denied, and whether access is allowed only to a specific region, but you can't select individual storage accounts.

SQL: Represents the addresses for Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, and Azure SQL Data Warehouse. You can specify whether traffic is allowed or denied, and you can limit to a specific region.

AppService: Represents address prefixes for Azure App Service.
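Conceptually, a service tag is just a Microsoft-managed set of IP prefixes that a rule can match against. The sketch below illustrates this in Python; the Storage prefix is a made-up example, while 168.63.129.16 is the AzureLoadBalancer health-probe address named in the text.

```python
import ipaddress

# Hypothetical tag-to-prefix mapping. In Azure, Microsoft manages and
# updates these prefix lists; you only reference the tag name in a rule.
service_tags = {
    "AzureLoadBalancer": ["168.63.129.16/32"],
    "Storage": ["20.38.0.0/16"],  # illustrative prefix, not a real published range
}

def matches_tag(ip: str, tag: str) -> bool:
    """Return True if the IP falls inside any prefix that the tag represents."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p) for p in service_tags[tag])

assert matches_tag("168.63.129.16", "AzureLoadBalancer")  # health-probe source
```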

