IB Computer Science - Option C Web Science

sidebar

Text set off from the main body of text in a text box that provides additional information for the reader.

scraping

black hat. copies content from popular websites, often to get more visits and sell advertisements.

interoperability

the capability of two or more computer systems to share data and resources, even though they are made by different manufacturers. the computers need to agree on how to exchange information, which brings in standards.

Describe how a domain name server functions

A domain name server (DNS) translates the domain name from a client's request (example: "http://www.facebook.com") and resolves it to a network (IP) address (example: "204.74.112.1").
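
A minimal sketch of what this lookup looks like in code, using Python's standard socket module (the actual resolution is delegated to the operating system's resolver; the hostname is just the earlier example):

import socket

hostname = "www.facebook.com"                 # domain name taken from the URL
ip_address = socket.gethostbyname(hostname)   # resolved via DNS
print(hostname, "->", ip_address)             # prints the currently resolved IP address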

robots.txt

A file written and stored in the root directory of a website that asks web-crawlers not to index certain pages of the website. not all bots follow the standard; malware can ignore robots.txt. it saves time by focusing crawlers on the important sections of the site.
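
An illustrative robots.txt (the paths and bot name are made up; real files use the same User-agent/Disallow/Allow directives):

User-agent: *
Disallow: /private/
Disallow: /tmp/
Allow: /

User-agent: BadBot
Disallow: /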

intellectual property

A product of the intellect, such as an expressed idea or concept, that has commercial value.

Describe the purpose of a URL

Defines a pathway to a specific resource. It allows different resources to be linked, which creates the basis for navigating the WWW.

link baiting

white hat. 'click-bait'. gives people incentives to click on a link, usually by writing sensational or controversial content or titles.

guest blogging

white hat. allowing others to submit content for the blog in exchange for exposure and a link to the author's website

robots.txt white hat

white hat. helps a site get indexed by web-crawlers while preventing duplication of content, i.e. preventing the indexing of redundant information.

quality content

white hat. makes your page more valuable to a search engine's index; other web pages are more likely to link to your page if its content is of a high standard.

suggest how developers can create pages that appear more prominently in search engine results.

this is called search engine optimisation (SEO) and covers different techniques. it is a big part of web marketing; because search engines do not disclose exactly how they work, it is hard for developers to perfectly optimise pages.

ambient intelligence advantages

home care systems for elderly or handicapped. real-time shopping. personal information provides better means for risk assessment for insurance companies.

quantity of returns

how many pages are indexed by a search engine

authorities

a page that contains valuable information and is truly relevant to the search query. assumed to have a high number of in-links.

ontology disadvantages

laborious. requires expertise. hard to implement on a large scale. only one perspective on meaning.

out-links

links that point from the page in question to a different page. i.e. if page W has an out-link, it contains the URL of another page, e.g. page Z.

in-links

links that point to the page in question. i.e. if page W has an in-link, there is a page Z containing the URL of page W.

general features of lossy compression

looks for common patterns in the data to compress. part of the original data is lost. compresses to a low file size. usually includes settings for compression quality (gives users options). as data becomes more compressed, the quality deteriorates.

static websites disadvantages

low scalability. hard to update. higher cost in the long term to update content.

static websites advantages

lower cost to implement. flexibility.

IN-section

made up of nodes that can reach the SCC but cannot be reached from the SCC.

Tendrils

made up of nodes that are not connected to the SCC. connected to either the IN or the OUT sections.

OUT-section

made up of nodes that can be reached from the SCC but cannot reach the SCC.

describe the role of network architecture, protocols and standards in the future development of the web.

make data more meaningful in order to create a semantic web. this requires new standards, but the ideas of today can still be used. it is important for these to be secure enough, extensible and scalable. scalability is especially important as the web grows and cloud applications become prominent.

SLATES

components that make it possible to implement new (Web 2.0) features: Search, Links, Authoring, Tags, Extensions, Signals.

small world graph

mathematical graphs in which most nodes are not direct neighbours, but any given pair of nodes can be reached via a small number of links.

properties of small world graph

the mean shortest-path length is small. many clusters. analogy: airline flights, where you can most likely reach any city in under three flights. maximises connectivity. minimises the number of connections.

metrics.

measurements or "scorecards" that marketers use to identify the effectiveness of different strategies or tactics

connectivity

metric to discuss how well parts of a network connect to each other.

number of hits

metric. a page hit is when a page is downloaded

search engine referral

metric. different search engines have different market shares; knowing which search engine traffic comes from helps to find improvements.

search engine share of referring visits

metric. how the web page has been accessed, e.g. direct access or referral pages; can indicate how meaningful the traffic is.

search engine terms and phrases

metric. identify the most common search keywords and optimise

conversion rate by search phrase

metric. percentage of users that sign up coming from a search term

quality of returns

metric. the quality of where a site is placed in the returned results.

time taken

metric. time spent on the page

number of sites receiving traffic from search engines

metric. which individual pages on large sites are being accessed.

mobile computing advantages

mobile computing. increase in productivity. entertainment. cloud computing. portability.

characteristics of mobile computing

mobile computing. portability. social interactivity. context sensitivity. connectivity. individual customisation.

mobile computing disadvantages.

mobile computing. quality connectivity. security concerns. power consumption.

effects of decentralised and democratic web advantages

more control over data. making surveillance harder. avoid censorship. possibly faster speed.

decentralised web disadvantages

more difficult to maintain. harder to develop and implement. increased need for security.

outline the principles of searching algorithms used by search engines

the most common are PageRank and HITS. they also include other factors, such as how long the page has existed and the frequency of keywords on the page.

conclusion of decentralised web

most of the internet is still centralised as most websites follow the client-server model, which is further encouraged by corporations wanting to make profit.

corner

most of the time used for a logo with a link to the main page.

public cloud disadvantages

no control over sensitive data. security risks.

web diameter

no standard definition, usually the average distance between two random nodes. important because it is an indicator of how quickly one can reach some page from any starting page on average. important for a crawler.

folksonomy disadvantages

no vocabulary control. informal metadata. unstructured. low expressive power. ambiguity.

Tubes

nodes not part of the SCC. made up of nodes that link the IN and OUT sections without passing through the SCC.

<head>

not visible on a page, but contains important information about it in form of metadata

6 degrees of separation

notion that everyone in the world is separated from all other individuals by at most 6 additional nodes in a social network.

general features of lossless compression

can typically only compress files to around 50% of their original size. important where compression must not cause any loss of information.

ambient intelligence disadvantages

privacy concerns. surveillance. discrimination. risk of automating too much. reliability, maintainability. compatibility between systems. dystopian ideas.

authentication

proving someone's identity. usually done through a username and password or two-factor authentication.

folksonomy

takes the form of tags and is also called "collaborative metadata" because metadata can be added by different users. it provides informal knowledge for classifying and organising content. folksonomies help improve searchability, classification and content visibility and are part of most social networks.

site optimisation.

white hat. manipulate content wording and site structure and meta tags to maximise search engine efficiency.

lossy compression

data compression techniques in which some amount of data is lost. This technique attempts to eliminate redundant information.

decentralised web advantages

decentralisation. higher fault tolerance. stability. scalability. privacy. data portability more likely. independence from large corporations. potential for high performance systems.

identification

defined as the process of claiming an identity. this process is important for privacy and is required for authentication.

web graph

describes the directed links between web pages in the WWW. it is a directed graph in which nodes are pages and edges are hyperlinks.

explain why distributed systems may act as a catalyst to a greater decentralisation of the web

distributed systems consist of many different nodes that interact with each other and are decentralised by design.

public cloud advantages

easy and inexpensive because the provider covers hardware, application and bandwidth costs. scalability to meet needs. no wasted resources. costs calculated by resource consumption only.

evaluation of lossy compression

significant reduction in file size. the most important uses are streaming multimedia files and VoIP. does not work with all file types. consider the compression ratio (file size to quality).

dynamic website disadvantages

sites are usually based on templates, less individual sites. higher initial cost. usually a larger codebase.

Static websites

sites that only rely on the client-side and don't have any server-side programming. The website can still be dynamic through use of JavaScript for things like animations.

parallel web-crawling advantages

the size of the web grows, increasing the time it would take a single crawler to download pages. scalability: a single process cannot handle the growing web. network load dispersion: as the web is geographically dispersed, dispersing crawlers disperses the load. network load reduction.

grid computing disadvantages.

software and standards are still developing. non-interactive job submission --> unreliable.

grid computing advantages

solves larger, more complex problems in less time. easier collaboration. makes efficient use of existing hardware. lower chance of failure.

open standards

standards that follow certain open principles. public availability. collaborative development. royalty free. voluntary adoption. e.g. file formats (HTML), protocols, programming languages.

SCC

strongly connected core to and from which many nodes lead. can reach all nodes in OUT. cannot reach nodes in IN.

ontology advantages

structured. high expressive power. explicit meaning. formal specification of knowledge domains. created by knowledge engineers.

hyperlinks

"Hot spots" or "jumps" to locate another file or page; represented by a graphic or colored and underlined text.

Web 1.0

"read-only web". Static documents which are formatted as HTML files. Little user interaction or content contribution.

Web 2.0

"read-write web". More dynamic, enabled user interaction. Appearance of blogging platforms like Blogger. New concepts like blogs, social networks or video-streaming platforms. Improved design, JavaScript and dynamic content.

Web 3.0

"read-write-execute" Internet of Things. Smarter searches and the presentation of relevant data fitting into context. User input becoming more meaningful.

surface web

(open internet) web sites freely accessible to all users over the internet. web that can be reached by a search engine. static and fixed pages. e.g. Google, Facebook, YouTube

Blog

A web log; a journal or newsletter that is updated frequently and published online. static blog --> HTML, CSS. dynamic blog --> database, MySQL, JavaScript, PHP, content management system (CMS).

Wiki

A collaborative website that can be edited by anyone that can access it; can be vandalised by users with ill intent. ability to change quickly.

protocols

A set of rules governing the exchange or transmission of data between devices. e.g. TCP, IP, FTP.

web-crawler

A softbot responsible for following hyperlinks throughout the Internet to provide information for the creation of a web index.

personal page

A web page created by an individual that contains valid and useful opinions, links to important resources, and significant facts. usually static. normally created using some form of website creator like Wix.

dynamic website

A website that generates a web page directly on the server, usually to retrieve content dynamically from a database. This allows for data processing on the server and enables much more complex applications.

PageRank Algorithm

Algorithm used by Google to rank websites in their search engine results. Pages are given a rank depending on how many in-links there are to a page, and that determines the order in which pages appear. The importance of an in-link depends on the PageRank of the linking page. PageRank counts links per page and determines which pages are the most important.
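
A simplified sketch of the PageRank iteration in Python, run on a tiny made-up graph (the page names and damping factor are illustrative, not Google's actual implementation):

# out-links of each page in a toy web graph
graph = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
damping = 0.85
ranks = {page: 1 / len(graph) for page in graph}

for _ in range(50):                    # iterate until the ranks stabilise
    new_ranks = {}
    for page in graph:
        # rank passed on by every page that links to this one
        incoming = sum(ranks[p] / len(graph[p]) for p in graph if page in graph[p])
        new_ranks[page] = (1 - damping) / len(graph) + damping * incoming
    ranks = new_ranks

print(ranks)  # pages with more (and better-ranked) in-links end up higher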

server-side scripting

Also called back-end scripting; scripts are executed on the server before the web page is downloaded by a client (e.g. when you log in to an account, your input is sent to the server to be checked before your account page is downloaded). These are the parts of the web page that must be refreshed whenever there is a change. e.g. CGI

peer-to-peer computing

Also known as P2P. A process in which people share the resources of their computer by connecting directly and communicating as equals.

banner

An advertisement appearing across the top of a Web page

Forums

An online discussion group, much like a chat room; uses a database for storing posts, a CMS and a server-side language, e.g. PHP.

table of contents

An ordered list of the topics in a document, along with the page numbers on which they are found. Usually located at the beginning of a long document. normally in a sidebar.

CSS

Cascading Style Sheets contain hierarchical information about how the content of a web page will be rendered in a browser.

CGI

Common Gateway Interface. A programming standard that allows visitors to fill out form fields on a Web page and have that information interact with a database, possibly coming back to the user as another web page. CGI may also refer to Computer-Generated Imaging, the process in which sophisticated computer programs create still and animated graphics, such as special effects for movies.

XML

Extensible Markup Language, a way of writing data in a tree-structured form by enclosing it in tags. human readable. used for representation of arbitrary data structures.

XSLT

Extensible Stylesheet Language Transformations; the CSS of XML. a styling language for XML. used for data presentation and data transformation, by parsing a source tree of nodes and transforming it into something different.

FTP

File Transfer Protocol; uses a TCP-based network to pass files from host to host. files can also be manipulated/modified remotely. control information (log-ins) is sent separately from the main file, which distinguishes FTP from HTTP.

Client-side scripting

Happens in the browser of the client. It is used for animations, form validation and also to retrieve new data without reloading the page. e.g. in a live-chat

HTML

Hypertext Markup Language; semantic markup language. standard language for web documents, uses elements enclosed by tags to markup a document.

HTTPS

Hypertext Transfer Protocol Secure. Encrypts HTTP traffic with SSL or TLS. ensures authentication of the website using digital certificates, and integrity and confidentiality through encryption of communication.

ISO

International Organization for Standardization; non-govt. org. that develops and publishes international standards. these standards ensure safety, reliability and quality for products and services.

IP

Internet Protocol. The main delivery system for information over the Internet. part of TCP/IP protocol suite. defines the format of a packet. IPv4 uses 32-bit address. IPv6 uses 128-bit address.

Distinguish between the Internet and the World Wide Web

The Internet is an interconnected set of networks and computers that permits the transfer of data governed by protocols like TCP/IP. The World Wide Web is a set of hypertext-linked resources identified by URIs that are transferred between a client and a server via the Internet.

deep web

Invisible web; the area of the web generally not accessible by search tools. includes content that requires authentication or VPN access, e.g. private social media accounts, emails, content protected by a paywall such as online newspapers and academic research databases.

how does HITS work?

It finds the top 200 pages based on the occurrence of keywords, then finds all the pages that link to this initial set. these sets are combined, and the algorithm gives each page a hub weight and an authority weight. the algorithm then lists the pages based on their weights.
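
A minimal Python sketch of the hub/authority update described above, run on a tiny made-up base set (page names are illustrative):

links = {                  # out-links within the combined base set
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
hub = {p: 1.0 for p in links}
auth = {p: 1.0 for p in links}

for _ in range(20):
    # authority weight: sum of hub weights of pages linking to the page
    for p in links:
        auth[p] = sum(hub[q] for q in links if p in links[q])
    # hub weight: sum of authority weights of the pages it links to
    for p in links:
        hub[p] = sum(auth[q] for q in links[p])
    # normalise so the weights do not grow without bound
    a_norm = sum(v * v for v in auth.values()) ** 0.5
    h_norm = sum(v * v for v in hub.values()) ** 0.5
    auth = {p: v / a_norm for p, v in auth.items()}
    hub = {p: v / h_norm for p, v in hub.items()}

print(sorted(auth, key=auth.get, reverse=True))  # pages ordered as authorities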

characteristics of P2P

P2P. decentralised. each peer acts as both client and server. resources and content are shared amongst all peers, often faster than client-server transfer. malware can also be distributed faster.

types of grid computing devices

PCs and servers

RDF

Resource Description Framework. a data model based on the idea of giving information meaning through 'triples'. Triples make a statement about resources, following a structure of subject-predicate-object, and are expressed using URIs. <http://en.wikipedia.org/wiki/Tony_Benn> <http://purl.org/dc/elements/1.1/title> "Tony Benn" (1. subject, 2. predicate, 3. object: the actual title).

black hat

Search engine optimization tactics that are counter to best practices such as the Google Webmaster Guidelines, basically breaks the rules and regulations of search engine guidelines.

mobile computing

Technology that allows transmission of data, voice, and video via any wireless-enabled computing device, without a fixed physical connection.

ubiquitous computing

The condition in which computing is so woven into the fabric of everyday life that it becomes indistinguishable from it. e.g. smart watches.

<body>

The main part of the page document. This is where all the visible content goes.

TCP

Transmission Control Protocol - provides reliable, ordered, and error-checked delivery of a stream of packets on the internet. TCP is tightly linked with IP and usually seen as TCP/IP in writing. receives data from an application and divides it up ready for IP. Provides error-checking which can lead to the re-sending of packets.

URI

Uniform Resource Identifier - specifies how to access a resource on the Internet. more general than a URL

URL

Uniform Resource Locator; a location or address identifying where documents can be found on the Internet; a Web address. primarily used for HTTP, and also for other protocols like FTP. follows a specific syntax.
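
A small Python sketch showing that syntax, using the standard urllib.parse module (the URL itself is a made-up example):

from urllib.parse import urlparse

parts = urlparse("https://www.example.com:443/path/page.html?query=web#section")
print(parts.scheme)    # https               (protocol)
print(parts.netloc)    # www.example.com:443 (host and port)
print(parts.path)      # /path/page.html
print(parts.query)     # query=web
print(parts.fragment)  # section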

private cloud

a company owns the data centres that deliver the services to internal users only.

grid computing

a computer network where each computer shares its resources with all other computers in the system.

lossless compression

a data compression algorithm that allows the original data to be perfectly reconstructed from the compressed data.

describe how the web can be represented as a directed graph

a graph is a set of nodes that can be connected through edges. nodes = web pages. edges = hyperlinks.
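
A sketch of this idea as a Python data structure (page names are made up): each key is a node (web page) and each list entry is a directed edge (hyperlink).

web_graph = {
    "home.html":  ["about.html", "news.html"],
    "about.html": ["home.html"],
    "news.html":  ["home.html", "about.html"],
}
# there is an edge from page X to page Y if Y appears in web_graph[X]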

adaptive crawling

a more advanced crawler. it prioritises what to crawl and adapts the queue live so that more relevant information is indexed first. each document is also analysed for relevance.

search engine

a program that searches for and identifies items in a database that correspond to keywords or characters specified by the user, used especially for finding particular sites on the World Wide Web.

bow-tie structure

a proposed structure for the web. consists of SCC, IN, OUT, Tubes, Tendrils.

effectiveness of a search engine is determined by the assumptions

a search engine will return results based on the algorithms and parameters used when it was developed. these are based on assumptions, and therefore a search engine can only be effective as long as these assumptions are met.

navigation bar

a set of hyperlinks that give users a way to display the different pages in a website

sub-graph

a set of pages that are part of the internet. can be a set of pages linked to a specific topic or pages that deal with part of an organisation.

characteristics of grid computing

all computers are spread out but connected. grid computing creates a 'virtual supercomputer' from the system.

White Hat SEO

also known as 'ethical' SEO; tactics that focus on a human audience rather than on search engines.

semantic web

also web of data or web 3.0. an extension of the World Wide Web where information is given meaning, better enabling computers and people to work together. this allows computers to perform more complex tasks based on the meaning of information.

client-server architecture

an application is split into the client side and the server side. a client-server application does not necessarily need to work over the internet; it could be limited to a local network.

JavaScript

an object-oriented computer programming language commonly used to create interactive effects within web browsers.

ontology

an ontology is an explicit, formal specification of a shared conceptualisation. a conceptualisation includes the concept of some piece of information itself, but also relations between concepts, attributes and instances. an ontology is also called authoritative metadata.

continuation

the area of the web page that prevents the sidebar from extending to the bottom of the web page.

outline future challenges to search engines as the web grows

as it grows, it becomes harder to filter out the most relevant information, and paid results play an important role. some data also becomes more semantic, and search engines will need to adapt.

effects of decentralised and democratic web disadvantages

barrier to usability --> novices. less practical sometimes. DNS alternatives necessary for legible domain names. higher maintenance costs.

HITS algorithm

based on idea that keywords aren't the only thing that matters. introduces authorities and hubs. the algorithm is based on mathematical graph theory, where a page represents a vertex and links between pages are represented by edges.

hybrid cloud

best of both private and public clouds. sensitive and critical applications run in a private cloud, while the public cloud is used for apps that require high scalability.

keyword stuffing

black hat. An unpopular practice of including a huge variety of keywords in the header of an HTML document in the hopes that a search engine will display it even when the content of the page is not relevant to the search.

doorway pages

black hat. Pages optimized for a single keyword that re-direct to the real target page.

link farming

black hat. a group of websites that all hyperlink to every other site.

blog comment spamming

black hat. automated posting of hyperlinks for promotion on any kind of publicly accessible online discussion board.

paid links

black hat. paying for links on other sites to receive more visits

cloaking

black hat. presenting different content to web spiders than to users by delivering content based on IP addresses.

hidden texts and links

black hat. text that can't be seen by the end user, but can be found by the search engine. considered search spam.

content automation

black hat. the process of creating content of the website in an automatic manner by using a tool or script.

cookies

client-side. hold data specific to a website or client and can be accessed by either the server or the client. the data in a cookie can be retrieved and used for a website page. some sites require cookies to function. cookies are used to transport information from one session to another and eliminate the use of server machines with huge amounts of data storage --> smaller and more efficient.
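
An illustrative exchange of the headers involved (the cookie name and value are made up): the server sets a cookie in its response, and the client sends it back with later requests so the session is remembered.

HTTP/1.1 200 OK
Set-Cookie: session_id=abc123; Secure; HttpOnly

GET /account HTTP/1.1
Host: www.example.com
Cookie: session_id=abc123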

climate change.

collective intelligence. Climate CoLab is a project by MIT for collective intelligence where people work together to create, analyse and select detailed proposals for what to do about climate change.

astronomy

collective intelligence. Galaxy Zoo is a project where people contribute to classify a large number of stars and galaxies.

finance

collective intelligence. in one paper scientists have analysed collective trends from Twitter posts in order to try and predict the stock market.

reddit place

collective intelligence. run by Reddit over 3 days. users of the platform could colour a single pixel on a 1000 by 1000 pixel canvas every 5 minutes. the result was impressive considering the different interests of different users.

cloud computing advantages

elasticity (scale up or down depending on demand). pay per use (elasticity allows users to pay only for the resources they actually use). self-provisioning (users can create an account on their own).

type of ubiquitous computing devices

embedded devices, IoT devices, mobile computing devices, networking devices.

explain why there needs to be a balance between expressivity and usability on the semantic web.

expressivity is the guiding factor for how semantic information will be, because the goal is for computers to understand the meaning of information. giving information more expressive power may come at the expense of usability, but the semantic web needs to be easy to use; folksonomies, which are low in expressive power, still allow a system to suggest similar content to a user.

technologies in development of web 2.0

flash, silverlight (embedded multimedia). AJAX allows asynchronous communication, so new information can be downloaded without reloading the web page. XML provides standard formats for exchanging data between the client and the server. RSS feeds allow notification of regular content updates.

importance of hubs and authorities

for connectivity, a larger number of hubs improves connectivity, while authorities are more likely to decrease connectivity as they usually do not link to many other pages.

how does a web-crawler work?

for each page it finds a copy is downloaded and indexed. In this process it extracts all links from the given page and then repeats the same process for all found links. tries to find as many pages as possible.

degree distribution

used for predicting the development of the web. the degree of a page is the number of connections it has, which can further be categorised into incoming and outgoing links. the higher the degree, the fewer pages have it.
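
A small Python sketch computing in-degree and out-degree for each page of a made-up web graph:

web_graph = {
    "home.html":  ["about.html", "news.html"],
    "about.html": ["home.html"],
    "news.html":  ["home.html", "about.html"],
}
out_degree = {page: len(links) for page, links in web_graph.items()}
in_degree = {page: sum(page in links for links in web_graph.values())
             for page in web_graph}
print(out_degree)  # number of outgoing links per page
print(in_degree)   # number of incoming links per page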

HTTP

hypertext transfer protocol; the protocol used to transfer and exchange hypermedia. part of the TCP/IP protocol suite. a user agent requests some resource from a server and the server responds with the required information. the most common request methods are GET and POST.
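
An illustrative request/response pair (the host and resource are made up) showing the user agent asking for a resource and the server answering:

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html

<html> ... </html>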

Search Engine Pages

indexes content from the internet or an intranet and serves related links based on a user's queries. uses web crawlers. the back-end is programmed in an efficient language, e.g. C++

dynamic website advantages

information can be retrieved in an organised way. allows for content management systems. low ongoing cost, unless the design changes.

describe the aims of the semantic web

information is often difficult to process for machines due to ambiguity and different data formats. the semantic web offers to express information in a way that is unambiguous. the idea of the semantic web is to create something similar to a database on a larger scale with data from the internet.

<title>

inside head, displayed in tab of the web page.

explain the function of a browser

interprets and displays information sent over the internet in different formats; plug-ins allow multimedia to be displayed; retrieves information from the internet via hyperlinks.

parallel web-crawling disadvantages

overlapping: might index pages multiple times. quality: if a crawler wants to download an important page first, this might not work. communication bandwidth: parallel crawlers need to communicate, which takes significant bandwidth. if parallel crawlers request the same page frequently over a short time, they will overload the servers.

hubs

pages that are relevant to finding authorities and contain useful links towards them. assumed to have a high number of out-links.

adaptive crawling example

queue = LoadSeed();
while (queue is not empty) {
    dequeue url
    request document
    store document

    // process document
    analyze document to determine if it is relevant
    update prioritizer with information about this document

    // extract and queue links
    parse document for links
    eliminate already-seen links
    prioritize links
    add prioritized links to queue
}

simple algorithm example

queue = LoadSeed();
while (queue is not empty) {
    dequeue url
    request document
    store document for later processing
    parse document for links
    add unseen links to queue
}

folksonomy advantages

quick and easy collaborative creation. can be used on a large scale. can include multiple social meanings. important for the social web.

ambient intelligence

related to ubiquitous computing and is based on the idea of computing being integrated unobtrusively in the environment, providing intelligent services as people need it. e.g. smart home

parameters search engines use to compare

relevance: determined by programs like PageRank; the bigger the index, the more relevant pages the search engine can return. user experience: search engines look to find the 'best' results for the searcher, part of which is the user experience, including ease of use, navigation, direct and relevant information, etc.

cloud computing

relies on client-server architecture, but places focus on sharing resources over the internet. often offered as a service to individuals and companies.

evaluation of lossless compression

produces the same data as the initial file; this is required e.g. for installation files, where the information must be identical after compressing and decompressing. no loss in quality with lossless compression.

private cloud disadvantages

same high costs for maintenance, staffing, management. additional costs for cloud software.

private cloud advantages

scalability. self-provisioning. direct control over changing computer resources on demand. limited access through firewalls improves security.

evaluate methods for searching information on the web.

search engines can be used for navigational search or research. web search engines like Google aim at searching the entire publicly accessible web. Other search engines like Google Scholar search for academic papers only.

outline the purpose of web-indexing in search engines

search engines index websites in order to respond to search queries with relevant information as quickly as possible. for this reason they store information about indexed pages in a database, so that pages relevant to a search query can be quickly identified. indexing also has the purpose of giving a page a certain weight, to allow for ranking later.
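
A minimal Python sketch of the kind of inverted index this implies, mapping each keyword to the pages that contain it so a query never has to re-scan every page (page names and text are made up):

pages = {
    "page1.html": "web science option for ib computer science",
    "page2.html": "search engines index the web",
}
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

print(index["web"])  # all pages relevant to the query term "web"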

XML - server-side

server-side. a flexible way to structure data, which can therefore be used to store data in files or to transport data. it allows data to be easily manipulated, exported, or imported. websites can then be designed separately from the data content. RSS feeds use XML, for example.

database

server-side. an organised collection of data that allows retrieval of data based on queries. data is accessed through a database management system (DBMS). the DBMS usually provides some sort of library through which scripts in various languages (e.g. PHP, JavaScript) can make queries and read or manipulate data.
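
A minimal Python sketch of a script querying a DBMS through such a library, here the standard sqlite3 module with an in-memory database (the table and values are made up):

import sqlite3

conn = sqlite3.connect(":memory:")                        # throwaway database
conn.execute("CREATE TABLE pages (url TEXT, title TEXT)")
conn.execute("INSERT INTO pages VALUES (?, ?)", ("home.html", "Home"))
for row in conn.execute("SELECT url, title FROM pages WHERE title = ?", ("Home",)):
    print(row)                                            # ('home.html', 'Home')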

public cloud

services provided by a third party, usually available to the general public.

standards

a set of technical specifications that should be adhered to; they allow for functionality. browser standards include the correct use of HTML, JavaScript, etc.

distinguish between the text-web and the multimedia-web

the text-web is text-based information, whereas the multimedia-web combines different types of media.

simple algorithm

the algorithm starts from a seed (a number of pages from which to start) and puts all the URLs in a queue. it then loops over the queue until it is empty, each time dequeuing a URL, requesting its document, and indexing this document while also collecting links from it. these links are added to the queue if they haven't been visited yet.

relationship between meta-tags and web-crawlers

the description meta-tag provides the indexer with a short description of the page. the keywords meta-tag provides keywords about the page. while meta-tags used to play a role in ranking, they have been overused and are therefore no longer considered as important. crawlers still use meta-tags to compare keywords to the content of the page when assigning weight; as such, they are still relevant.

copyright

the exclusive legal right, given to an originator or an assignee to print, publish, perform, film, or record literary, artistic, or musical material, and to authorize others to do the same.

collective intelligence

the intellectual outcome of a group of people working together, which could range from families to international companies. the internet plays an important role as it can connect people who wouldn't have connected otherwise.

privacy

the seclusion of information from others. this can relate to health care records, sensitive financial information, residential records. essential to prevent unauthorised access.

explain why the web may be creating unregulated monopolies

the world wide web should be a free place where anybody can have a website, but this normally comes at the cost of a domain name. in addition, to reach an audience, further marketing through SEO is usually necessary. therefore, it is often easier to publish content on an existing platform, e.g. Twitter or Blogspot. this leads to unregulated monopolies.

describe how folksonomies and emergent social structures are changing the web

they can improve search results. they can be used to detect trends. they can be used to discover new content. can be used for a more individual experience. customised advertising by analysing user preferences and interests through tags.

web-crawler limitations

they might look at meta-data contained in the head of web pages, depending on the crawler. a crawler might not be able to read dynamic content, as crawlers are simple programs.

types of P2P devices

usually PCs

diameter

usually the average path length between two random nodes.

<meta> tags

there are various types of meta tags; they give search engines information about the page, but are also used for other purposes, such as specifying the charset used.
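
An illustrative <head> section showing common meta tags together with <title> (the content values are made up):

<head>
  <meta charset="UTF-8">
  <meta name="description" content="Revision notes for IB Web Science">
  <meta name="keywords" content="web science, search engines, SEO">
  <title>Web Science Notes</title>
</head>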

types of mobile computing devices

wearables, smartphones, tablets, laptops, transmitters and other hardware involved in cellular networks.

