Software Engineering Final Prep


UID: Content Architecture

Content architecture involves designing the properties of, and relationships between, data structures, application objects, and algorithms. It requires enumerating all types/sources of content: information associated with tasks (information displayed or entered) and information accessible through the application (task-domain object and related-object attributes and history, help info, links to related info). Structure data by identifying entry points, containment associations, and all relevant associations.

ITS: architecture and integration

The architecture of a system resolves its functionality into distinct components, defines the function of each component, and defines the interfaces between them. When we talk about integration strategy, it is natural to describe that strategy in terms of which components will be introduced, in which order, with what subsets of their specified interfaces and functionality. We should consider how the architecture can be designed to support an integration strategy. The integration strategy is ultimately dictated by the schedule and the architecture; the architecture itself is a compromise.

ITS: development and integration

If you construct and integrate software in the wrong order, it's harder to code, harder to test, and harder to debug. If none of it will work until all of it works, it can seem as though it will never be finished. It too can collapse under its own weight during construction— the bug count might seem insurmountable, progress might be invisible, or the complexity might be overwhelming— even though the finished product would have worked.

TERA: project size vs productivity

At small sizes, the biggest influence on productivity is the skill of the programmer. As project size increases, team size and organization become greater influences on productivity. Productivity on small projects can be 2-3 times as high as productivity on large projects, and productivity can vary by a factor of 5-10 from the smallest projects to the largest.

BD: correct bug fixing

Fixing bugs is easy; the hard part of debugging is finding the defect. Because fixing seems easy, it is especially error prone. Understand the problem, and the overall program, before you fix it; confirm the defect diagnosis; and do not rush solutions. Keep track of the original code in case the fix causes errors, and only change the code for good reason. Check your fix, and make sure the test cases that diagnosed the problem show that it has been resolved. Rerun the whole program to check for side effects, add unit tests that expose the defect, then look for similar defects.

TERA: risk management plan

for each high exposure risk formulate proactive mitigation measures (what can we do to reduce its likelihood, what can we do to reduce its expected impact), and a reactive monitoring and management plan (what danger signs should we watch for, how will we respond when problem happens). Then perform a cost-benefit comparison of alternatives and determine the most cost-effective approach. Incorporate risk management plan into plans and schedules.

TTC: test harness

impose a standard form on each test case (set-up, test execution, assessment, clean-up, reporting). make it easy to write and add new test cases, organize test cases into suites, run desired tests regularly or when needed, and produce easily digested reports of each test run.
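The standard form above can be sketched as a tiny harness; the class and method names here are illustrative, not from any particular framework:

```python
class TestCase:
    """Standard form imposed on every test case."""
    name = "unnamed"

    def set_up(self):      # prepare fixtures
        pass

    def execute(self):     # run the code under test
        pass

    def assess(self):      # return True if the observed behavior was correct
        return True

    def clean_up(self):    # release fixtures
        pass

def run_suite(cases):
    """Run each case in the standard order and report results per case."""
    report = {}
    for case in cases:
        case.set_up()
        try:
            case.execute()
            report[case.name] = case.assess()
        finally:
            case.clean_up()      # clean-up runs even if execution fails
    return report

class AdditionCase(TestCase):
    """An example test case plugged into the standard form."""
    name = "addition"
    def execute(self):
        self.result = 1 + 1
    def assess(self):
        return self.result == 2
```

Because every case has the same shape, adding a new test is just writing another small subclass, and the suite can be run regularly or on demand.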

TERA: technical risks

incorrect or incomplete requirements, designs that can't be built or won't work, schedule based on inadequate analysis, team lacks experience w/tools, techniques, problems that prove harder than expected

AP: how much process

just enough to ensure success, consider the following: how large and complex is the problem? how well understood is the problem? how critical is quality? how critical are cost and schedule?

STP: release phases

major release (may not be upwards compatible with previous releases), minor release (minor additions, enhancements, and bug-fixes, with a reasonable expectation of upwards compatibility), update release (only bug-fixes), patches (a mini-package containing only a few changed files; if a bug is serious, the organization might want to make updated versions of particular programs available immediately).

AP: SCRUM roles: product owner

manages and prioritizes the task backlog, provides continuous feedback to developers, and decides whether or not a delivery is acceptable.

BD: root cause analysis

many bugs are not random, as people repeat the same mistakes, and there is inadequate training, tools, and methodology. After a problem has been found and fixed, identify the root cause of the defect, and understand how it was made and how we failed to find it. Statistical studies of root causes allow us to identify clusters and find ways to eliminate them.

R: MTBF

Mean time between failures. In most cases this is equal to the MTTF (mean time to failure, given by multiplying the time period by the probability of failure). There are, however, situations where the mean time to first failure is very large, but the mean time to subsequent failures is much smaller. In such cases, the steady-state MTBF would be equal to the mean time to subsequent failures.
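A minimal sketch of the steady-state calculation as observed in the field (the `availability` function and its MTTR parameter are an added illustration, not from the notes):

```python
def mtbf(total_operating_hours, failure_count):
    """Steady-state MTBF observed in the field: uptime per failure."""
    if failure_count == 0:
        return float("inf")   # no failures observed yet
    return total_operating_hours / failure_count

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is up, given a mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)
```

For example, 1000 operating hours with 4 failures gives an MTBF of 250 hours.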

CCD: Release/Reuse Equivalency principle

only components that are released through a tracking system can be effectively reused. The granule of reuse is the granule of release, and this granule is the package. Create opaque, cohesive packages, and if only certain classes are needed, create and support packages containing only those classes.

TERA: management risks

schedules imposed without commitment, external dependencies with no back-ups, failure to assign required resources, doing unnecessary/less important work, failure to monitor progress and problems, delayed or ineffective problem response, poor inter/intra-group communication

PST: Sprint velocity

self-calibrating measure that includes productivity, overhead, bugs, competing priorities. Uncertainty is recognized and quantified, including consistency of recent velocity measurements and convergence of backlog grooming/estimates. Sprint Velocity enables better projections of completion and replaces optimistic promises with extrapolations. This in turn enables better management as it guides the choice of what to accept in the next sprint, it highlights the backlog, productivity, and distractions, and makes the product owner a partner in development.
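A hedged sketch of the extrapolation this enables (the function names and the simple mean over recent sprints are illustrative choices):

```python
import math

def average_velocity(recent_velocities):
    """Self-calibrating measure: mean story points completed per sprint,
    which implicitly folds in overhead, bugs, and competing priorities."""
    return sum(recent_velocities) / len(recent_velocities)

def sprints_remaining(backlog_points, recent_velocities):
    """Replace optimistic promises with an extrapolation from measurement."""
    return math.ceil(backlog_points / average_velocity(recent_velocities))
```

With recent velocities of 18, 22, and 20 points, a 100-point backlog projects to 5 more sprints; the spread of those measurements indicates how much to trust the projection.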

RAD: functions vs procedures

A semantic distinction: a function takes in parameters and returns its only value through the function return itself. A procedure can take input, modify, and output parameters with no limits. (A function commonly operates as a procedure and then returns a status value.)
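A minimal illustration of the distinction (all names here are invented for the example):

```python
import math

def area(radius):
    """Function: returns its only value through the return itself."""
    return math.pi * radius ** 2

def read_record(buffer, record):
    """Procedure-style routine: fills an output parameter ('record')
    in place and returns only a status value."""
    if not buffer:
        return False          # status: nothing left to read
    record["value"] = buffer.pop(0)
    return True               # status: success
```

`read_record` shows the common hybrid from the notes: it operates as a procedure (modifying its output parameter) and then returns a status value.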

UID: U/I Challenges

tools are growing more complex (more tasks and options, more complex environments, integration with more applications); users are non-homogeneous (different needs, goals, technical depth, backgrounds); there is a disconnect between designers and users (different skills, experience, and goals for program function). Users do not receive formal training, so the UI must be obvious or self-teaching; users have difficulty getting technical support; the UI is the largest element of the user experience and can make or break the product.

PST: PERT charts

a two-dimensional graphical representation of the dependency relationships among activities (or resources). The partial orderings discovered through dependency analysis are often represented in a PERT chart.

STP: alpha testing

alpha products are usually incomplete, missing functionality and documentation, very buggy, and non-standard in terms of installation and management. Alpha sites are carefully selected: ones that are prepared to deal with problems and can be trusted to exercise the product. The goals of alpha testing are to gather feedback on key features and content, and to provide early access for partners and key customers.

BD: scientific debugging method

1. Stabilize the error.
2. Locate the source of the error (the "fault"):
   a) Gather the data that produces the defect.
   b) Analyze the data that has been gathered, and form a hypothesis about the defect.
   c) Determine how to prove or disprove the hypothesis, either by testing the program or by examining the code.
   d) Prove or disprove the hypothesis using the procedure identified in 2(c).
3. Fix the defect.
4. Test the fix.
5. Look for similar errors.

TTC: 100% code coverage?

100% branch coverage may not be enough, as it doesn't necessarily cover all combinations of decisions, including a wide range of loop iterations. 100% path coverage may not be possible (impossible combinations, errors that should never happen). Higher coverage is always better, but large numbers of paths may still hide problems; supplement coverage with reviews.

STP: Pareto principle

80% of cycles are spent in 20% of the code, often even more. Performance testing should be focused on the part of the code that consumes the most cycles.

ITS: choosing integration strategy

A good integration strategy is one that, in combination with the architecture: makes it possible to build and test a relatively complete (though perhaps initially boring) product from day one; allows components to be constructed independently, and in a natural order (as dictated by risk, resources, and other external drivers); allows components to be integrated in an incremental fashion to ease testing and debugging; provides a meaningful framework for each new component to integrate into; makes it possible to integrate and exercise earlier components without having to wait for later components; delivers incrementally usable and testable functionality with successive integrations; and reduces the impact that delays or difficulties in the construction of one component can have on others.

R: run time audits

A software audit is an internal or external review of a software program to check its quality, progress or adherence to plans, standards and regulations. Run time audits assess the quality at run/compile time.

R: fire-walls/barricades

Barricades are a damage-containment strategy. One way to barricade for defensive-programming purposes is to designate certain interfaces as boundaries to "safe" areas. Check data crossing the boundary of a safe area for validity, and respond sensibly if the data isn't valid.
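One way to sketch such a boundary check (the `validate_age` routine and its accepted range are invented for illustration):

```python
def validate_age(raw):
    """Barricade: data crossing into the 'safe' area is checked here.
    Code inside the barricade may then assume ages are valid integers."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {raw!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age
```

Everything inside the barricade can trust the data; only the boundary routines pay the cost of checking.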

STP: system vs unit testing

Contrasting goals (is the component ready to integrate vs ready to ship), context (testing components in relative isolation vs testing the entire assemblage), and focus (component functionality and specifications vs whole system functionality and specifications, whole system behavior).

AP: Agile Alliance principles

Customer satisfaction by early and continuous delivery of valuable software. Welcome changing requirements, even in late development. Deliver working software frequently (weeks rather than months). Close, daily cooperation between business people and developers. Projects are built around motivated individuals, who should be trusted. Face-to-face conversation is the best form of communication (co-location). Working software is the primary measure of progress. Sustainable development, able to maintain a constant pace. Continuous attention to technical excellence and good design. Simplicity, the art of maximizing the amount of work not done, is essential. The best architectures, requirements, and designs emerge from self-organizing teams. Regularly, the team reflects on how to become more effective, and adjusts accordingly.

R: fault handling

Depending on the specific circumstances, you might want to return a neutral value, substitute the next piece of valid data, return the same answer as the previous time, substitute the closest legal value, log a warning message to a file, return an error code, call an error-processing routine or object, display an error message, or shut down— or you might want to use a combination of these responses.
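A few of these responses can be sketched together (the `SensorReader` class and its clamping policy are illustrative assumptions, not from the notes):

```python
class SensorReader:
    """Sketch of several fault-handling responses from the notes:
    a neutral starting value, returning the previous answer,
    substituting the closest legal value, and logging a warning."""
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.last_good = 0.0          # neutral value to start
        self.warnings = []            # stand-in for a log file

    def read(self, raw):
        if raw is None:               # missing data: return the previous answer
            self.warnings.append("missing reading; reusing last value")
            return self.last_good
        clamped = min(max(raw, self.low), self.high)  # closest legal value
        if clamped != raw:
            self.warnings.append(f"out-of-range {raw} clamped to {clamped}")
        self.last_good = clamped
        return clamped
```

Which combination is appropriate depends on the circumstances; a flight-control system and a word processor would make very different choices here.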

RAD: developing routine design

Design following principles of cohesion, abstraction and encapsulation.

TTC: limitations of testing

Developer tests tend to be "clean tests", testing whether the code works, as opposed to "dirty tests" that probe all the ways the code breaks. Aim for five dirty tests for every clean test. Developer tests also tend to reflect optimistic views of code coverage, and skip more sophisticated kinds of test coverage (e.g. branch coverage). Testing alone is not adequate quality assurance and needs to be supplemented with other techniques, including independent testing and collaborative construction techniques.

AP: Agile vs planned process?

Don't put too much faith in paper process; the key deliverable is working software and the key goal is customer satisfaction. Continuous change is a given, and the best process is collaboration, with regular communication between stakeholders, frequent small updates, and good feedback. Overemphasis on task definition is myopic, as people, not processes, solve problems.

ITS: daily builds and smoke tests

Every file is compiled, linked, and combined into an executable program every day; then the program is put through a smoke test, a simple check to see whether the product "smokes" when it runs. The build and smoke test should be automated, developers should check in their code frequently, the smoke test should be kept up to date with the code, expanding with it, and broken builds should be a rare occurrence. This reduces the risk of low code quality, and the system is kept in a known, good state.

BD: core dump

For an application program, a core dump is a copy of the contents of the writeable (e.g. data and stack) segments of its address space. The operating system typically takes such a snapshot whenever a process dies unexpectedly, and there are often means to force a core dump of a running program. If the operating system finds itself in trouble, it may save the entire contents of memory, as well as system logs and other information. In some cases the problem is explained, because we can see the entire sequence of events; in other cases the core dump is just an unfortunate victim, and we can only see how bad the problem is and vaguely where it came from.

UID: Web UIs vs GUIs

HTML browsers are more standardized than the various GUI toolkits (where the choice of toolkit will affect design and navigation-metaphor choices). As some GUIs are becoming Web front-ends, standardization is improving. For the web, the designer gives up full control and shares responsibility for the UI with the users and their client hardware and software.

PST: status tracking

Ideally a daily process of monitoring what people are working on and what problems they are encountering, and comparing that with the plan of record. The primary purpose is to identify potential problems as quickly as possible, so that they can be dealt with before the slippage becomes unrecoverable. This can be done in formal meetings (e.g. a SCRUM stand-up) or informal hallway conversations. Typical problems uncovered by status tracking: the person assigned to a task has concluded that the assignment they were given no longer makes sense; someone has been working on a task, originally estimated at 1/2 day, for three days; someone is unable to make progress on a task because they are blocked on an external dependency; someone is not working on the expected task because some other (presumably more important) task has preempted it.

TERA: confidence bands

When estimating, we have imperfect knowledge and cannot produce high-confidence point estimates. To deal with the uncertainty, we can quantify it with probability bands. Clearly state assumptions and give estimates within a confidence band; if the confidence band is too large, be prepared to reduce the uncertainty through investigation.

CCD: benefits of OO-design

OO languages provide valuable features including mechanisms to support class inheritance, information hiding, support for interface polymorphism, and automatic object instantiation. This allows us to organize designs into modular classes, decide what is public/private, and encourages us to reuse common components.

TTC: testability

Observability of key events and state, Controllability, Clear Definition of Correctness, Logical Isolatability of Functionality

BD: bug report life cycle

Once a bug is confirmed, a developer is assigned to it, and either fixes it or makes another resolution regarding the bug. Once it is resolved, if the resolution is verified and accepted (or just accepted) it is closed; otherwise it is reopened and reassigned to the developer to come up with a more adequate solution.

BD: priority and severity

Severity is the better defined of the two terms, and is usually taken to be a measure of the consequences. System failure and loss of important data might be considered very severe; confusing error messages might be considered of relatively low severity. For grading severity, we might consider permanent data or service loss, inability to use primary functionality, impaired use of primary functionality (or inability to use secondary functionality), minor inconvenience, or suggestion. Priority is a concept with a much wider range of meanings, but it is a measure of the importance of fixing the bug, combining severity with the likelihood of encountering the bug, the availability of a work-around, and expectations for the product.

STP: performance principles

The Pareto principle, performance requires real measurement (our intuition is usually wrong), performance demands eternal vigilance (if we aren't getting faster we're getting slower), performance is mostly about design (code optimization is only occasionally useful)

ITS: incremental integration

The process starts with dummy versions of every component, runs automatic and regular builds, and integrates each small change as it is ready. You must be able to test integrated code as it is added. Benefits include being able to build and test the system from day 1; problems are found sooner and more quickly; problems are spread out over the schedule; the schedule and quality are more predictable; and there is less wasted re-engineering.

TTC: unit testing frameworks

Unit testing frameworks are most often third-party products that are not distributed as part of the compiler suite. They help simplify the process of unit testing, having been developed for a wide variety of languages.
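As one concrete example, Python ships a unit testing framework in its standard library (`unittest`); the function under test here is invented for illustration:

```python
import unittest

def leap_year(year):
    """Code under test (an invented example)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    """The framework discovers test_* methods, runs each in isolation,
    and reports failures per method."""
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(leap_year(2000))

# Typically run from the command line: python -m unittest <module>
```

The framework supplies the harness machinery (discovery, fixtures, assertion helpers, reporting), so each new test is just one more method.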

STP: testing and bug discovery

Unit testing tends to focus on the correctness of single component (the one I built), whereas system testing is primarily concerned with the correct operation of the entire system. Unit testing also tends to have a clearly bounded scope (does each of these mechanisms operate correctly), whereas system testing attempts to answer a more nebulous question: "is it good enough?" There is a nuanced relationship between testing and bug discovery. Different types of testing find different types of bugs. Specification based test cases are likely to have a high efficacy at finding bugs that compute output as a function of input, but a relatively low efficacy at finding bugs in the management of internal state. White-box test cases are likely to have a high efficacy at finding static algorithmic and data management errors, but dynamic interaction problems tend to be harder to find by testing. Targeted stress tests may be effective at finding resource exhaustion problems and race conditions, but are often useless for finding functionality problems. Thus, our expectation of how many (new) bugs we will find when we begin the next phase of testing depends on how similar the new phase of testing is to testing that has already been done.

M: when/how to comment

Useful Code Commenting includes prose or pseudo-code summaries (explain the purpose of the code that follows, high level overview of the algorithm, enumerate pre-conditions that must hold), rationale and references (remind reader of important issues, explain non-obvious choices, refer reader to more detailed discussions), draw attention to module sub-sections (start of a new class or routine)

BD: hypothesis confirmation

You gather the test data that divulged the defect, analyze the data that has been produced, and form a hypothesis about the source of the error. You then design a test case or an inspection to evaluate the hypothesis, and you either declare success (regarding proving your hypothesis) or renew your efforts, as appropriate. When you have proven your hypothesis, you fix the defect, test the fix, and search your code for similar errors.

BD: stack trace

a list of all of the subroutine calls (and, if we are lucky, parameters) that were on the stack at the time of death. For interpreted languages the stack trace may be produced directly by the interpreter. It is often possible to infer the cause of the error directly from the stack trace. If, for instance, the failure was precipitated by an addressing error resulting from trusting a bad argument, we can usually see the whole history of where that argument came from, and can often infer the cause of the problem from this information alone.

RAD: Table driven code

a scheme that allows you to look up information in a table rather than using logic statements. Tables become more attractive as logic chains become more complex. Address how to look up entries and what to store in the table. In a message-reading routine, for example, table-driven code is more economical because it can describe the format of each message in a table instead of encoding it in program logic.
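A hedged sketch of the message-reading example (the message kinds and field layouts are invented):

```python
# Each message format is one table entry: a list of (field name, converter)
# pairs, instead of a chain of if/elif logic per message kind.
MESSAGE_FORMATS = {
    "LOGIN":  [("user", str), ("pin", int)],
    "STATUS": [("code", int)],
}

def read_message(kind, fields):
    """Look up the format in the table and convert each field."""
    layout = MESSAGE_FORMATS[kind]
    if len(fields) != len(layout):
        raise ValueError(f"{kind} expects {len(layout)} fields")
    return {name: convert(value)
            for (name, convert), value in zip(layout, fields)}
```

Adding a new message type means adding one table entry; the reading logic itself never changes.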

PST: Gantt charts

a two-dimensional graphical representation of when tasks are to be performed. Time is represented on the x-axis, and may be marked with calendar dates or merely relative to the start of the project. Each task is represented in a horizontal row, by a box enclosing the starting and ending dates of that task. Tasks may be grouped and ordered by resource, by time, or by some work-breakdown structure. Gantt charts can be used to represent plans, history, or both. For plans, it is common to include arrows indicating dependencies. For history, it is common to include indications of planned vs actual start and completion dates.

R: error detection

Add error-detection code to check return values and test pre-conditions; some errors are difficult to detect. General principles of error detection: detect as close as possible to the error's first appearance; prefer general checks that find many problems; keep checks simple, with a high probability of correctness; and keep overhead low, since checks execute often.

AP: agile philosophy

address people and teamwork issues, focus more directly on real goals, put principles and methodology over process, still enumerate required process activities but avoid over-specifying tasks and deliverables. Value individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan.

TERA: agile estimation

agile methods are not date driven and strive for customer satisfaction; flexible requirements demand flexible dates. Design and estimation are thus progressive: as tasks and requirements are refined, the next few tasks on the list are designed, and estimates are based on those detailed designs. Refactor as the need for change is recognized.

TTC: testing and risk

all lines of code not equally likely to be buggy (Pareto principle), bugs more likely to be found in subtle or complex code. Factors associated with bug risk are complexity of algs, data structures, specifications, defining correctness assertions, possible modes of failure, multi-step transactions, global variables, and resources shared with other threads. Invest lion's share of testing effort based on likelihood of errors in implementation, likelihood errors will not turn up in basic testing, and likely impact to program functionality of errors in the module.

PST: earned value analysis

An estimate of construction size and effort yields an expected cost for each sub-task (its budgeted value). The earned value of an effort is the value of all the tasks completed so far; it can be weighted (1/4 earned at start and 3/4 at completion, or partial values for partial progress such as tests passed).
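A minimal sketch of the weighted computation (the task tuple layout is an invented convention):

```python
def earned_value(tasks):
    """Sum the budgeted value of work done so far, using the
    1/4-at-start, 3/4-at-completion weighting mentioned in the notes.
    Each task is (budgeted_value, started, completed)."""
    total = 0.0
    for budget, started, completed in tasks:
        if completed:
            total += budget          # full budgeted value earned
        elif started:
            total += 0.25 * budget   # only 1/4 earned at start
    return total
```

For example, a completed 10-unit task plus a started-but-unfinished 20-unit task earns 10 + 5 = 15 units, regardless of how much effort has actually been spent.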

CCD: classes in non OO-languages

any module in any language should implement a general and intuitive class, export a well abstracted interface to that class, employ good information hiding, be usable without change for many purposes, be organized and grouped with related modules.

STP: release criteria

Any rational discussion of release criteria should begin with a clear statement of goals for the release; release goals are the highest level of requirements. Conditions for release typically involve some combination of functionality, quality, and delivery time. Functionality and date requirements are easily specified and tested. It is hard to state quality criteria in measurable terms: we need to find measurable product characteristics that are well correlated with our goals and are valid predictors of customer experience, and rephrase our quality requirements as quantitative statements. Some common metrics are known defects, defect discovery rates, quality assurance process, tests passed, code coverage, hours of testing, and feedback from limited-availability trials. It is impossible to fully specify in advance all of the criteria that must be satisfied; release criteria are living documents, subject to continuous evaluation, and it is critical that they are established early in the project, when people are focused on goals and plans.

R: defensive programming

Based on defensive driving: recognizing that programs will have problems and modifications, a smart programmer develops code accordingly. The main idea is that if a routine is passed bad data, it won't be hurt, even if the bad data is another routine's fault. Handle garbage in by checking the values of data from external sources, checking the values of all routine input parameters, and deciding how to handle bad inputs. The best form of defensive coding is not inserting errors in the first place: use iterative design, write pseudo-code and test cases before writing code, and hold low-level design inspections. Use these in tandem with defensive programming.
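A small illustration of checking routine input parameters (the routine and its neutral-default policy are invented for the example):

```python
def safe_divide(numerator, denominator, default=0.0):
    """Defensive routine: check every input parameter, and respond
    sensibly (here, a neutral default) instead of crashing when
    another routine passes bad data."""
    if not isinstance(numerator, (int, float)):
        return default
    if not isinstance(denominator, (int, float)) or denominator == 0:
        return default
    return numerator / denominator
```

Whether to return a default, raise an error, or log and continue is a per-routine decision; the point is that the decision is made deliberately rather than left to a crash.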

TTC: black-box testing

based on specified functionality, not design knowledge: does it perform all specified functions, with all specified options, correctly? Does it reasonably handle obvious errors such as invalid requests and inabilities to perform the requested operation? Common for acceptance criteria. Smarter black-box testing uses parameter-choice heuristics: use the specifications (what functions to test, not values), boundary value analysis (choose parameters near the edges), and orthogonal array testing (choose well-distributed combinations of parameters).
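Boundary value analysis can be sketched as follows (the range and helper names are invented):

```python
def boundary_values(low, high):
    """Choose integer test inputs at and just around the edges of [low, high],
    where off-by-one defects tend to hide."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def in_range(value, low, high):
    """Code under test: a simple range check."""
    return low <= value <= high

# For the range [1, 10] the boundary inputs are [0, 1, 2, 9, 10, 11];
# only the outermost two should be rejected.
```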

STP: beta testing

beta product is near final form, with all functionality and documentation complete, and few significant known problems. beta sites are the real customers, they put the product to real use and have agreed to give feedback. The goals of beta testing are to confirm the product is ready to ship, and to gather last minute feedback.

TTC: specification-based testing

Black-box testing: test cases should be based on component specifications. Since components are designed to meet specifications, write test cases to the assertions in the specifications. Test cases not based on the specifications are testing functionality that is not required (irrelevant or invalid). If it is still not possible to assess the acceptability of the component, the problem must be missing specifications.

STP: load/stress testing

bugs are often found in special cases, such as resource exhaustion, error handling, and unlikely combinations of events. Stress tests create these continuously. We can think of them as traffic generators, running at full capacity with changing, random mixes of requests; they provide continuous error generation with random error selection. Stress tests can run for many days or longer, and shake out many hard-to-cause problems. Load generation is derived from the capacity in the system specification, and performance should be measured at load. Many bugs involve concurrent operations (locking, allocation/freeing, protection, etc.), and these require automatic load generation to generate traffic of specific types at calibrated rates.

TTC: orthogonal array testing

by letting the parameter space define an N-dimensional solid, we can sample all corners of the N-dimensional solid, and then start choosing random points in N-space. This technique yields a fairly uniform test density throughout the N-dimensional solid.
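The corner-sampling idea, as the note describes it, can be sketched as follows (function names are invented; the random sampling is seeded only to make the sketch reproducible):

```python
import itertools
import random

def corner_cases(parameter_ranges):
    """All corners of the N-dimensional parameter solid:
    every combination of each parameter's low and high value."""
    return list(itertools.product(*[(lo, hi) for lo, hi in parameter_ranges]))

def random_points(parameter_ranges, count, seed=0):
    """Then choose random interior points in N-space,
    giving fairly uniform test density throughout the solid."""
    rng = random.Random(seed)
    return [tuple(rng.uniform(lo, hi) for lo, hi in parameter_ranges)
            for _ in range(count)]
```

Three parameters give 2^3 = 8 corners, so the corner set stays manageable even before random sampling begins.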

CCD: class diagrams

class diagrams show the classes of the system, their interrelationships (including inheritance, aggregation, and association), and the operations and attributes of the classes. They are used for a wide variety of purposes, including both conceptual/domain modeling and detailed design modeling. Class diagrams depict classes and static relationships: boxes are classes, with name, attributes, and methods, and lines represent the class relationships (inheritance, aggregation, composition, dependency, and association).

R: use of assertions

code that is used in development (routine or macro usually) that allows a program to check itself as it runs. When an assertion is true, everything is operating as expected. enable programmers to quickly flush out mismatched interface assumptions, errors that creep in when code is modified... Use assertions to document assumptions made in the code and flush out unexpected conditions. Use assertions for conditions that should never occur (error handling code checks for anticipated conditions that might occur). Assertions are a useful tool for documenting preconditions and postconditions.
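A short illustration of assertions documenting pre- and postconditions (the `schedule` routine is invented; note that Python assertions are stripped when the interpreter runs with `-O`, which suits their role as development-time checks):

```python
def schedule(tasks):
    """Assertions document assumptions: these conditions should never
    occur if the callers are correct. They are not error handling,
    which would check for anticipated conditions that might occur."""
    assert isinstance(tasks, list), "precondition: tasks must be a list"
    assert all(t.get("hours", 0) > 0 for t in tasks), \
        "precondition: every task needs positive hours"
    total = sum(t["hours"] for t in tasks)
    assert total >= 0, "postcondition: total hours can never be negative"
    return total
```

A failing assertion points directly at a mismatched interface assumption, close to where it first appeared.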

RAD: UML swim-lane diagrams

combine interaction and activity diagrams. They describe multi-threaded flow of control (remote procedure call and return, asynchronous message exchanges). Parallel threads in parallel columns, each having its own activity diagram. Horizontal lines represent messages from the sender to the receiver, and horizontal bars represent joins (awaiting reception of a message).

M: statement layout

It is common (though somewhat dated) to limit line length to 80 characters, which discourages deep nesting and decreased readability. Use spaces for clarity, format continuation lines deliberately (decide what to do with the part of the statement that spills over), write one statement per line, and one data declaration per line.

R: defect, fault, error, failure

defect: a "bug", something in the design or implementation that doesn't conform to specifications, or predisposes a component to error or failure. fault: an incident where a defect is exercised, causing a component to malfunction; depending on the nature of the defect, the incident can be precipitated by different uses and external events. error: an incident where a component malfunctions, the result of a fault occurring in a defective component. failure: an incident where a system fails to provide services as expected. Reporting an error is not a failure; not all errors give rise to failures, and not all component failures give rise to system failures.

BD: similar bugs

defects tend to occur in groups, if you pay attention to the kinds of defects you make, you can correct all bugs of that kind. requires thorough understanding of the problem.

CCD: Liskov Substitution Principle

a derived sub-class can substitute for its parent: subclasses must be usable through the base class interface without the need for the user to know the difference. "design by contract"
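A conventional illustration of substitutability (the shape classes are a standard textbook example, not from the notes):

```python
class Shape:
    """Base-class contract: area() returns a non-negative number."""
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def area(self):
        return self.width * self.height

class Square(Rectangle):
    """Substitutable here: it only fixes width == height at construction
    and honors the same area() contract as its parent."""
    def __init__(self, side):
        super().__init__(side, side)

def total_area(shapes):
    """Client code uses the base interface without knowing the subtype."""
    return sum(shape.area() for shape in shapes)
```

Note the caveat baked into this example: if `Rectangle` exposed setters for width and height independently, `Square` would famously violate the principle, since a user of the base interface could observe the difference.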

RAD: routine names

describe everything a routine does (outputs and side effects), avoid meaningless, vague, or wishy-washy verbs, make names as long as necessary and avoid differentiating by numbers

RAD: UML state diagrams

describe state and transition models, where events drive state changes. They are similar to activity diagrams, where activity boxes have 2 compartments, with the state name in the top portion and processing steps in the bottom portion. Arrows represent state transitions, and labels describe conditions triggering the transition. Processing steps can also be placed on lines.

TTC: test plan

describes how testing will be used to gain confidence about the correctness of specified components. It may contain an overview of component design, aspects of operation we want to test, risk analysis, individual test case definitions (these might be deferred to more detailed suite definitions), and rationale for the chosen tools, techniques, and schedules. It will include product testing phases and specify when and by whom the suites will be run, how we will determine whether a product has "passed", and the test suites to be used and goals for each.

CCD: dependency inversion principle

describes the overall structure of a well-designed OO app: modules with high-level policy should not depend on modules that implement low-level details. Both high-level policy and low-level details should depend on abstractions.
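
A small sketch with hypothetical names: the high-level Alerter depends only on an abstract MessageSink, and the low-level MemorySink implements that same abstraction, so neither level depends on the other directly.

```python
from abc import ABC, abstractmethod

# The abstraction both levels depend on.
class MessageSink(ABC):
    @abstractmethod
    def write(self, text: str) -> None: ...

# Low-level detail: implements the abstraction.
class MemorySink(MessageSink):
    def __init__(self):
        self.lines = []
    def write(self, text: str) -> None:
        self.lines.append(text)

# High-level policy: depends only on MessageSink, not on MemorySink.
class Alerter:
    def __init__(self, sink: MessageSink):
        self.sink = sink
    def alert(self, msg: str) -> None:
        self.sink.write(f"ALERT: {msg}")
```

Swapping MemorySink for a file or network sink requires no change to Alerter, which is the point of inverting the dependency.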

PC: pair programming

a development practice that doesn't eliminate the need for reviews. Difficult design/coding is done in pairs: two heads to solve difficult problems, two eyes to see mistakes, partners serving defined complementary roles (design, challenge, suggest, code, review, test). It improves productivity and reduces errors. Partners must be able to work well together and carry their own weight; pairs should not be re-used, as different people have different strengths, and pairing should only be used on big enough problems.

PC: collective code ownership

encourages everyone to contribute new ideas to all segments of the project. Any developer can change any line of code to add functionality, fix bugs, improve designs, or refactor; no one person becomes a bottleneck for changes. Each developer creates unit tests for their code as it is developed, and all code released into the source code repository includes unit tests that run at 100%. Code as it is added, bugs as they are fixed, and old functionality as it is changed will all be covered by automated testing, so you can rely on the test suite to act as a watchdog.

R: failure mode enumeration

enumerate all likely errors: services (resources, access), hardware (transient and persistent), data (bad configuration, corrupt data), and communication errors (protocol, link, node). enumerate general classes of internal errors resulting from failures of your own components, and include problems reported to support (involving similar products)

TERA: estimation principles

estimates are not guesses; a good estimate comes from good data and good analysis. Estimates are not precise or deterministic: they are not single numbers, but confidence ranges. Estimates start out very rough and are revised throughout the life of the project. Get estimates from multiple sources: ask different people to make the estimates, and use multiple techniques to develop them. Estimate at a low level of detail, for each component and activity, step, task, and sub-task. Low-level estimates invite you to consider the full range of requirements, the design of the components, the methodology, and the kinds of problems that are likely to occur; planning and estimating go hand in hand.

R: FIT rates

Failures In Time rates. For components that are not expected to fail during their lifetimes, we can extrapolate the probability of failure to a very large population (or time period) and look at the number of expected failures per billion hours.
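
A quick worked example of the arithmetic, with made-up numbers: a FIT rate is failures per 10^9 device-hours, so scaling it by fleet size and operating time gives the expected failure count.

```python
def expected_failures(fit_rate: float, units: int, hours: float) -> float:
    """FIT = failures per billion device-hours; scale to a fleet."""
    return fit_rate * units * hours / 1e9

# Hypothetical: 100-FIT parts, 10,000 units, one year (8760 h) of operation.
print(expected_failures(100, 10_000, 8760))  # -> 8.76 expected failures
```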

UID: U/I Principles

familiar and consistent (contexts, objects, actions, icons, positions, style, metaphors, nouns, navigation); intuitive and responsive (current context clear, clear how to perform common operations, presented information easy to interpret); simple and convenient (doesn't expect the user to remember much, anticipates needs without forcing the user down a path or overwhelming with options); communicative and responsive (current context, state, and options clear; status of in-progress operations clear; completion and status of recent operations clear); helpful and robust (default and option menus for input, input and request validation, meaningful error messages); adaptable and configurable (different views for different user roles, multiple modes, configurable context, options, and views, options for language, locale, and accessibility).

TTC: test case specifications

functional requirements drive the enumeration of test cases. A specification should include the name of the test, the component and functional area it tests, a simple statement of the assertion it tests, what pre-conditions must be established, what operations will be invoked (and how), what results will be captured (and how), and how we determine whether or not those results are correct.

R: causes of failures

functionality is implemented incorrectly because we don't understand correctness, or we make mistakes without realizing they are wrong. We also make optimistic assumptions, that data will be good, inputs will be valid, and requests will be completed successfully, thus allowing errors to cascade to greater failures. Error handling is seldom well specified, architecture often ignores error handling, and error handling is seldom well tested.

M: code layout as documentation

the fundamental theorem of formatting: good visual layout shows the logical structure, makes code more readable, and helps experts perceive, comprehend, and remember important features of programs.

TTC: white-box testing

has further reach than black-box testing. White-box testing identifies equivalence partitions of input combinations to make better black-box testing parameter choices, reaches code poorly exercised through primary interfaces (state that results from combinations of operations and interactions between components), reaches functionality not described by the requirements, and covers areas of perceived risk.

M: elements of maintainability

given a program, module, or routine, it is easy to understand the structure, role, and how the code works. Given a problem or enhancement, it is easy to safely make and test the required changes, and be confident the program works correctly.

BD: good bug reports

good bug reports clearly describe the problem (what should have happened vs what did happen), the impact (consequences to affected users), the affected systems (what platforms, what versions of what software), how to cause the problem (ideally with simple test case, developing minimal failure cases is work), and are dispassionate, separate facts from opinion. The attributes include ID, title, status, description, and history.

AP: when agile is more/less appropriate

good for simple or poorly understood projects; the approach is intrinsically iterative. Agile processes can benefit from best practices (getting it right from the start), and agile practices can improve waterfall projects by incorporating team factors, accommodating unstable requirements, and using shorter sprints for safety and predictability.

RAD: pseudo code

higher level of abstraction than code, it is programming language independent, and can be written at the level of intent. It is good for roughing out an algorithm, as it is faster to write, easier to refine and evolve, and easily translated into code. It is good for design reviews because it is fast to read and review but still contains key algorithmic elements.

R: throwing exceptions

how code can pass errors or exceptional events along to the code that called it. Code that has no sense of the error context can return control to other parts of the system that might be better able to interpret the error and respond. Exceptions can also be used to straighten out tangled logic in a stretch of code. throw throws an exception object; code in some other routine up the calling hierarchy catches the exception within a try-catch block. Only throw exceptions for conditions that are truly exceptional. There is a complexity tradeoff: encapsulation is weakened, because code that calls a routine needs to know what exceptions might be thrown inside the code that is called. Exceptions thrown are part of the routine's interface.
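
A minimal sketch of the pattern, with hypothetical names: the low-level parser has no idea what a bad port means for the application, so it throws; a caller higher up catches and decides how to respond.

```python
class ConfigError(Exception):
    """Raised when configuration input is invalid."""

def parse_port(text: str) -> int:
    # Low-level code with no error context: it just throws.
    try:
        port = int(text)
    except ValueError:
        raise ConfigError(f"not a number: {text!r}")
    if not 0 < port < 65536:
        raise ConfigError(f"port out of range: {port}")
    return port

# A caller further up the hierarchy interprets the error and responds.
def load_port(text: str, default: int = 8080) -> int:
    try:
        return parse_port(text)
    except ConfigError:
        return default
```

Note the interface cost: anyone calling parse_port must know it can raise ConfigError, so the exception is effectively part of its signature.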

R: durability

how safely our data is being stored (for example annual petabyte durability: the probability of zero data loss, for a petabyte of data, over the course of one year)

PC: range of individual productivity

huge variations: studying professional programmers with an average of 7 years' experience, the ratio of initial coding time between the best and worst programmers was about 20 to 1, the ratio of debugging times over 25 to 1, of program size 5 to 1, and of program execution speed about 10 to 1. They found no relationship between a programmer's amount of experience and code quality or productivity. More general statements such as "there are order-of-magnitude differences among programmers" are meaningful and have been confirmed by many other studies of professional programmers.

M: comment types

end-line comments annotate individual lines that need more explanation, or record an error; paragraph comments are one- or two-sentence comments that describe a paragraph of code; data declaration comments describe aspects of a variable that cannot be captured by its name; control structure comments are natural comments (describing what a loop is doing); comments on routines, classes, files, and programs should give a meaningful top-level view of the contents and role

UID: usability testing

informal (playing with prototype, report on it, developers may be present in testing) vs formal (performed in a controlled usability testing lab, users given scenario to perform alone, developers not present, session recorded, usability analysts produce formal report)

RAD: defining parameters

use input, modify, output order; keep parameters in a consistent order; use all parameters; put status or error variables last; don't use routine parameters as working variables; limit the number to about 7

PC: extreme programming practices

minimalism in design and implementation (prototype to find best solutions, strive for the simplest solution, just-in-time software development, regular refactoring), pair programming, standards-based with reusable technology, test-driven development, continuous integration

RAD: UML sequence diagrams

model the flow of logic within your system in a visual manner, enabling you both to document and validate your logic, and are commonly used for both analysis and design purposes. Sequence diagrams are the most popular UML artifact for dynamic modeling, which focuses on identifying the behavior within your system. Model usage scenarios, logic of methods, and logic of services

RAD: appropriate macro use

modern languages provide alternatives to the use of macros, such as templates, const, inline, enum, and typedef. Macros result in inferior service from tools such as debuggers, cross-reference tools, and profilers. They are useful for supporting conditional compilation, but should generally not be used in place of a routine.

BD: gathering evidence to infer cause

no substitute for a thorough understanding of how the software in question is supposed to work, understanding how things work enables you to formulate hypotheses about what situations might give rise to the observed symptoms. understanding how things work enables you to make predictions about other (observable) consequences of hypothesized events. understanding how things work enables you to recognize anomalous results that, while not involved in the failure path, might be evidence of a problem.

BD: execution trace

normal instructions execute normally; the tracer interprets all system calls: it logs each system call and its parameters, allows the OS to execute the call, logs the return values, and allows the program to continue executing. This generates a huge amount of information and gives a good idea of what the program was doing.
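
Real tracers such as strace intercept at the system-call boundary; as an analogous sketch of the intercept-log-continue idea, Python's sys.settrace can log function calls and their arguments while letting execution proceed normally (function names here are hypothetical).

```python
import sys

log = []

def tracer(frame, event, arg):
    # Log each function call and its arguments, then let execution continue.
    if event == "call":
        log.append((frame.f_code.co_name, dict(frame.f_locals)))
    return None  # don't trace individual lines inside the call

def add(a, b):
    return a + b

sys.settrace(tracer)
add(2, 3)
add(5, 7)
sys.settrace(None)

print(log)  # [('add', {'a': 2, 'b': 3}), ('add', {'a': 5, 'b': 7})]
```

Even this tiny example shows the tradeoff the notes describe: two calls produce two log records, so a real workload produces an enormous trace.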

CCD: object diagram

object diagrams, sometimes referred to as instance diagrams, are useful for exploring "real world" examples of objects and the relationships between them. relationships among instances, not general (possible) class relationships

TERA: project size and activities

as project size increases, organization has a larger influence on a project's success or failure. Increasing project size increases the need for formal communication and leads to changes in project activities. Larger projects require more architecture, integration work, and system testing to succeed. On a small project, construction is the most prominent activity. Construction scales up proportionally, but other activities scale up faster. The following activities grow at a more-than-linear rate as project size increases: communication, planning, management, requirements development, system functional design, interface design and specification, architecture, integration, defect removal, system testing, and document production.

M: external documentation

outside the source code; tends to be high level compared to the code and low level compared to the problem definition, requirements, and architecture documents. Includes unit development folders (informally containing notes used in construction) or, more formally, a detailed design document that is lower level and describes alternatives and the reasoning for the approaches that were selected (sometimes this only exists in the code itself)

CCD: package diagrams

a package diagram can be used to organize any type of UML classifier; typically you create package diagrams to organize classes, data entities, or use cases. They allow you to organize model elements into groups.

DP: concurrency patterns

patterns to handle the issue of multi-threading and concurrency in programming

AP: failing of formal processes

places form over substance: people are goaled on process deliverables, but the real goals are customer satisfaction and return on investment. Bureaucracy may greatly burden small projects, and a formal process makes assumptions that may not hold. It is a lowest-common-denominator solution: it can improve the work of weak teams while limiting strong contributors.

UID: principles of CLI design

power (all options settable from the command line), brevity (short specification strings, good defaults), cohesion (program only does one thing: basic functions, all related to one set of objects or functions), familiarity and understandability (standard/mnemonic arguments, consistent argument syntax rules, usage messages on unrecognized options)
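
These principles can be sketched with Python's argparse; the tool, flags, and defaults below are all hypothetical, chosen to show short mnemonic flags, good defaults, and automatic usage messages.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # One cohesive tool; short flags with good defaults; standard syntax.
    p = argparse.ArgumentParser(prog="wordcount",
                                description="Count words in files.")
    p.add_argument("files", nargs="+", help="input files")
    p.add_argument("-i", "--ignore-case", action="store_true",
                   help="fold case before counting")
    p.add_argument("-n", "--top", type=int, default=10,
                   help="show the N most common words (default: 10)")
    return p

args = build_parser().parse_args(["-i", "-n", "5", "a.txt", "b.txt"])
print(args.ignore_case, args.top, args.files)  # True 5 ['a.txt', 'b.txt']
```

argparse also prints a usage message and exits on unrecognized options, covering the familiarity/understandability point for free.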

BD: triage

prioritizing bugs by sorting them into groups, disastrous bugs that must be fixed asap that render the product unacceptable, serious bugs that should be fixed before shipment that compromise product value, and minor bugs, where fixes can be deferred to later.

PC: challenges of pair programming

problems arise pairing different categories of programmers: expert-novice pairing can result in the expert typing while the novice passively watches, and paired novices need extra supervision; expert-average pairings can be a problem when the average programmer does not progress, doesn't interact enough with the expert, or doesn't get it. The most beneficial form of pairing is two programmers of roughly the same ability, but mixed pairing often ends up being the norm. Other problems include rushing, overconfidence, and everyone wanting to be in control.

TERA: backlog grooming

the product backlog is a prioritized list of desired product functionality. It provides a centralized and shared understanding of what to build and the order in which to build it. It is a highly visible artifact at the heart of the Scrum framework. Good backlog grooming ensures that it is detailed appropriately (not all items at the same level of detail at the same time), emergent (never complete or frozen as long as there is a product being developed or maintained), estimated (each item has a size estimate corresponding to the effort required to develop it; items further down the backlog may have no estimate, or only T-shirt sizes), and prioritized (ideally fully prioritized, but it is unlikely all items in the backlog are prioritized; items near the bottom are much less useful to prioritize). Grooming refers to creating and refining PBIs, estimating PBIs, and prioritizing PBIs. It is a collaborative effort, and should ensure that the items at the top of the backlog are ready to be moved into a sprint so that the development team can confidently commit to and complete them by the end of a sprint.

CCD: reasons to define classes

provide needed and obvious objects from problem domain, provide better behaved objects, compartmentalize complexity (bringing related code together and simplifying interface seen by rest of system), making applications more stable and portable (isolating implementation specifics within a class, abstraction protects app from future evolution)

STP: bug arrival rates

the rate of bug discovery follows a fairly predictable curve of discovery rate versus time spent testing. We can detect the peak and extrapolate the rate at which bugs will be discovered in the future under this testing regimen. It is a well-respected predictor of undiscovered bugs, and commonly included in ship criteria.

TTC: equivalence partitioning

related to mathematical equivalence classes: combinations of parameters that yield the same or equivalent computations. Validating one set of parameters thus lets you assume it works for all parameter combinations from the same equivalence partition. Partitions may not be obvious.
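
A sketch with a hypothetical component under test: the classic triangle classifier has a handful of partitions, and one representative input per partition stands in for the whole class.

```python
def classify_triangle(a: float, b: float, c: float) -> str:
    """Hypothetical component under test."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One representative test per partition stands in for the whole class.
partitions = {
    "invalid":     (1, 2, 10),
    "equilateral": (3, 3, 3),
    "isosceles":   (3, 3, 5),
    "scalene":     (3, 4, 5),
}
for expected, params in partitions.items():
    assert classify_triangle(*params) == expected
```

The "partitions may not be obvious" caveat applies even here: "invalid" actually contains several sub-partitions (non-positive sides versus triangle-inequality violations) that a more careful analysis would test separately.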

R: reliability and availability

reliability is the likelihood that a component or system will not fail during a specified period of time, quantified by probability of failure, MTTF, FIT rate. Availability refers to the likelihood that a component or system will be providing service at any particular instant (in steady-state operation over a long period of time). The availability of a system is a function of both its failures and its repairs. Availability is quantified by MTBF or as a probability: Av = expected up-time / ( expected up-time + expected down-time) which can be approximated as MTBF / (MTBF + MTTR). Availability is affected by reliability (in that more failures imply less availability), but also incorporates the expected repair or recovery time (Mean Time To Repair).
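The approximation above is easy to compute directly; the MTBF and MTTR figures below are made up for illustration.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Av ~= MTBF / (MTBF + MTTR), for steady-state operation."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical service: fails every 1000 h on average, takes 2 h to repair.
av = availability(1000, 2)
print(round(av, 5))  # -> 0.998
```

Note how the formula captures the text's point: you can raise availability either by failing less often (larger MTBF) or by repairing faster (smaller MTTR).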

PC: contributors to productivity

religious issues (language, style, IDE...), physical environment (floor space, interruptions, ability to silence phone/divert calls, privacy...)

UID: examples: game useability

seven stages: form the goal, form the intention, specify the action, execute the action, perceive the state of the world, interpret the state of the world, and evaluate the outcome. Hobbling a stage intentionally can make an interaction or game more challenging for the player.

TERA: risk assessment

similar to software failure mode enumeration. Enumerate all plausible sources of risk (unclear/unstable requirements, poorly understood technical problems, staff size, skills, experience, and tools, and complexities of the domain and platform). Then describe each in detail, rate for likelihood and impact, order by risk exposure (likelihood * impact) and decide which warrant inclusion in the plan.
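The exposure ordering step is simple arithmetic; the risks, likelihoods, and impacts below are hypothetical.

```python
# Rate each risk for likelihood (0-1) and impact (e.g. days lost),
# then order by exposure = likelihood * impact.
risks = [
    ("unstable requirements", 0.6, 20),
    ("new platform",          0.3, 40),
    ("key staff turnover",    0.1, 60),
]

ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: exposure {likelihood * impact:.0f}")
```

Risks above some exposure threshold would then warrant inclusion in the project plan, per the notes.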

TTC: code coverage

simple goal, to be certain we have tested all the code. Can measure code coverage statically by analyzing code, or at runtime with automatic instrumentation. We want to identify unexecuted code segments, define test cases to exercise them, and run them to verify coverage and result.

PC: extreme programming values

simplicity (do what is needed and asked for, but no more. This will maximize the value created for the investment made to date), communication (Everyone is part of the team and we communicate face to face daily. We will work together on everything from requirements to code), feedback (We will talk about the project and adapt our process to it, not the other way around, take every iteration commitment seriously by delivering working software), respect (Everyone gives and feels the respect they deserve as a valued team member), courage (We will tell the truth about progress and estimates).

RAD: routine design principles

simplicity and clarity (obvious what a routine does and how to use it), good abstraction (well-thought-out functions are easier to use), information hiding (avoid shared data; interactions mean complexity and bugs; encapsulate nasty details within a routine), cohesion (shorter routines are easier to understand). Not all routines are simple; designs must be put into writing (record the design, present it to others for review, make it the basis for implementation and white-box testing, and a tool for future training and maintenance). Should document purpose, parameters, functionality, returns, key assumptions, requirements, issues, and non-obvious decisions and algorithms.

DP: class patterns

singleton (single instance: a semi-global object with one master and shared information; create a class with only one instance and a private constructor; gives the uniqueness advantages of a global with class modularity, at the cost of synchronization and susceptibility to corruption), adaptor, proxy (local software interacts with remote resources frequently; creates a local simulation of the remote resource with intelligently managed communication; complexity limitations from remote interactions and awkward modes of failure), object pool (reduces the cost of object instantiation; creates a pool class with acquire and release that recycles a few objects many times; results in performance gains, but complexity in handling allocation failures and in sterilization and clean-up)
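
A minimal singleton sketch (class name and accessor are hypothetical; Python has no private constructors, so the blocked __init__ stands in for one):

```python
class Config:
    """Singleton: one shared instance behind a class-level accessor."""
    _instance = None

    def __init__(self):
        # Stand-in for a private constructor: direct Config() is blocked.
        raise RuntimeError("use Config.instance() instead")

    @classmethod
    def instance(cls) -> "Config":
        if cls._instance is None:
            obj = cls.__new__(cls)   # bypass the blocked constructor
            obj.settings = {}
            cls._instance = obj
        return cls._instance

a = Config.instance()
b = Config.instance()
a.settings["mode"] = "debug"
print(a is b, b.settings["mode"])  # True debug
```

The shared-information benefit and the corruption risk show up together here: any caller's write to settings is visible to every other caller.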

AP: SCRUM sprints

small number of well understood tasks, team commits to completing all of the work, ends with working software delivered to product owner.

ITS: continuous integration

software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. Without deferring integration, you don't need to predict how long integration will take, and the blind spot is eliminated. Removes one of the biggest barriers to frequent deployment.

CCD: class associations

source refers to target. Can describe a reference, bi-directional association, aggregation, or composition.

CCD: class dependencies

source uses target

TERA: size estimation

sources of information on project scope should start with formal descriptions of requirements (customer requirements/proposal, system specifications, or a software requirements specification). Must communicate the level of risk and uncertainty in an estimate to all concerned, and must re-estimate as more scope information is determined. Can estimate by analogy (similar projects, adding up the estimated sizes of similar components), or by counting product features and using an algorithmic approach such as function points. Product features can include subsystems, classes/modules, and methods/functions; more detailed features include the number of screens, dialogs, files, database tables, reports, messages, and so on.

PST: SMART milestones

specific (no ambiguity), measurable (should be quantitative or at least quantifiable, and there must be an objective process for measuring its achievement), achievable, relevant (directly related to achieving project goals), and timely (how many milestones there are and how the intervals between them are spaced depends on the situation: more junior people need more frequent milestones, customers much less frequent ones).

M: module layout

standard preamble (copyrights, version, module overview), imports/includes and type definitions, static then global then private data declarations, routine pre-declarations if required, constructors and destructors if required, and public and private routine definitions in some logical order

STP: system testing

system functionality and error handling, does the system do what it is supposed to do and handle errors correctly? Installation and upgrade testing (do all parts install and configure correctly on all platforms, do upgrades preserve persistent state), usability testing, security testing, interoperability testing (platforms, devices, different clients and servers), performance testing (whole system capacity, throughput, response time), stress testing (whole system overload, resource exhaustion, error recovery).

PST: quantifying progress

task completions are obvious milestones (specific, measurable, achievable, relevant SMART) but they may be poor measures of progress as they are not usually evenly spaced measures of work, and may be too large for fine grained tracking. Better measures enable fine grained (e.g. daily) tracking, meaningful schedule tracking, and meaningful budget tracking.

AP: product backlog

the team takes tasks from the backlog in order, discussing the meaning and design of each item, estimating the amount of work involved in points, and deciding how much work can be handled. Tasks near the top must be "ready": clearly understood and defined, broken into sprint-sized pieces, with no blocking issues or dependencies.

TTC: test cases and suites

test cases are scripts, programs, or other mechanisms that exercise a software component to ascertain that a correctness assertion is true. Related collections of test cases are organized into test suites, which together usually testify to the correctness of a particular component or component aspect.
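
A minimal sketch using Python's unittest (the component under test, clamp, is hypothetical): each test method is one test case asserting one thing, and the class groups them into a suite for the component.

```python
import io
import unittest

def clamp(x, lo, hi):
    """Hypothetical component under test."""
    return max(lo, min(x, hi))

class ClampSuite(unittest.TestCase):
    # Each test case checks one correctness assertion about clamp().
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)
    def test_in_range(self):
        self.assertEqual(clamp(7, 0, 10), 7)
    def test_above_range(self):
        self.assertEqual(clamp(99, 0, 10), 10)

suite = unittest.TestLoader().loadTestsFromTestCase(ClampSuite)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
print(result.wasSuccessful(), result.testsRun)  # True 3
```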

R: defensive instrumentation

tests and logging. Defensive instrumentation that should be left in the code includes code that checks for important errors (high likelihood or high impact) and ensures graceful handling; make sure the associated error messages are useful ones. It may also be useful to leave in (low performance impact) instrumentation and logging that might help diagnose problems that occur after the code has shipped. Remove code that was only present for testing, that logs messages that would confuse the user, or that causes hard crashes (e.g. asserts).

BD: regression testing

tests designed to make sure the software hasn't taken a step backwards, and that changes haven't introduced new defects. Must run the same tests every time; keep old tests and add new ones as the product matures.

CCD: common closure principle

the classes in a package should be closed together against the same kinds of changes: a change that affects a package affects all classes in that package. If the implementation of a class depends on the implementation of another, they should be delivered in a single package. Avoid strong inter-package coupling.

PST: project scheduling

the process of determining when what work can be done, and using which resources. Several different approaches can be taken: bottom-up (add up the estimates for each task), backwards (look at the planned completion date, and figure out when each task must be completed in order for the schedule to work), top-down (divide the available time into phases, proportional in size to the estimated difficulty), resource availability (identify the critical resources, and schedule the work that requires those resources around their availability), dependency ordered (prerequisite tasks must be completed before tasks that depend on them can begin), risk ordered (high-risk items should be done sooner, to allow maximum time to deal with problems discovered in the process), priority ordered (the tasks that must be completed should be scheduled before optional tasks, so that if we run short on time, it will be optional tasks that go unfinished).
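
The dependency-ordered approach is a topological sort; a sketch using Python's standard graphlib, with hypothetical tasks and prerequisites:

```python
from graphlib import TopologicalSorter

# Hypothetical tasks mapped to their prerequisites.
deps = {
    "design":    [],
    "code":      ["design"],
    "test plan": ["design"],
    "test":      ["code", "test plan"],
    "release":   ["test"],
}

# Any valid ordering places every task after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The other approaches layer onto the same structure, e.g. breaking ties among ready tasks by risk or priority instead of arbitrarily.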

BD: diagnostic instrumentation

there isn't always enough information: the program may produce little diagnostic output, stack/execution traces are useless if the detected failure occurs long after the actual fault, and execution in a debugger is not practical if you don't know what code to look at, or if the suspect code is called very frequently. Add diagnostic instrumentation to the program to log all conceivably interesting events, then attempt to reproduce failures with the new version; be aware this can change the symptoms of the failure.

RAD: program into your language

thinking about what you want to do, then assessing how to accomplish the objective with the programming tools at your disposal. Benefit from programming conventions that help you steer clear of hazardous features. Typically only minor concessions need to be made to the environment.

ITS: testing and integration

three principles of testing that bear on integration strategy. Test new code as soon and thoroughly as possible. Test component interfaces as soon and thoroughly as possible. Add new functionality and then test it in small, progressive increments. Comes from the idea that it is easier to test small amounts of code than large amounts, and easier to test a change when pre-existing code was known to be working. Can either use a harness, or exercise the new code in a complete system. As a result, for each component, we must ask ourselves how we will exercise it in its early stages of development, and we want to provide an exercise framework for new components as they are added as part of integration strategy. The best way to shake out misunderstandings of interface specifications is to combine the interacting components and exercise them together.

TERA: work estimates

times and resources required for each task, and are usually prepared by engineering. Schedules are based on estimates. Estimates are a pre-condition for any non-trivial software project, in that time and cost are key considerations in deciding whether or not to pursue a project. Good estimates lay the foundation for successful projects, by identifying the required resources and enabling us to predict how long it will take to develop the required functionality. Bad estimates can ensure the failure of a project

TERA: types of risk

typical software risk categories include: dependencies (cannot be controlled; use mitigation strategies such as working with the source to maintain visibility; includes problems with customer-furnished items or information, internal and external subcontractor relationships, inter-component or inter-group dependencies, availability of trained, experienced people, and reuse from one project to the next); requirements issues (building the wrong product, or the right product badly; these come from lack of a clear product vision, lack of agreement on product requirements, unprioritized requirements, a new market with uncertain needs, new applications with uncertain requirements, rapidly changing requirements, an ineffective requirements change management process, and inadequate impact analysis of requirements changes); management issues; lack of knowledge; unavailability of development or testing equipment and facilities; inability to acquire resources with critical skills; turnover of essential personnel; unachievable performance requirements; problems with language translations and product internationalization; and finally technical approaches that may not work.

TERA: risk identification

unclear/unstable requirements, poorly understood technical problems, staff size, skills, experience, tools, complexities of the domain and platform

STP: scenario-based testing

use cases/user stories are tied to real-world problems, capture the ways customers need to use the product, are easily validated, and can become the basis for test cases. To develop a scenario, describe a situation in which the customer would need to use the product; enumerate the steps in the process, the associated product interactions, the actions the user would take, and their expectations at each point. Finally, script that set of operations and the verification of the expected results at each juncture. Scenario-based test cases are: relatively easy to develop; complex, and therefore likely to find interesting problems; realistic representations of actual use; and quite different from, and complementary to, traditional one-assertion-at-a-time unit test cases.

TTC: static complexity

valuable as a basis for comparison (module A is more complex than module B), limited use for estimating test cases (branch and code paths are not equal to execution paths), ignores major sources of complexity including asynchronous interactions, thread serialization, fallibility of called services, and coupling through dynamic data. Static complexity can be viewed as a function of the structure of the system, connective patterns, variety of components and the strengths of interactions.
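One widely used static measure of this kind is McCabe's cyclomatic complexity, computed from a control-flow graph as M = E − N + 2. The tiny graph below is a hand-built illustration, not derived from real code:

```python
# McCabe's cyclomatic complexity from a control-flow graph:
#   M = edges - nodes + 2   (single connected component)

def cyclomatic_complexity(edges, num_nodes):
    return len(edges) - num_nodes + 2

# Flow graph of: if (a) { ... } else { ... }; return
# nodes: entry(0), then-branch(1), else-branch(2), exit(3)
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(cyclomatic_complexity(edges, 4))  # → 2: two independent paths
```

As the notes warn, this counts branch structure only; it says nothing about asynchronous interactions, thread serialization, fallible called services, or coupling through dynamic data.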

PST: PNR effort/time curve

Every project has an optimal staffing level, and (correspondingly) an optimal time in which it will be completed. Going significantly above or below the optimal staffing level will reduce work efficiency, and there may be a point beyond which adding people actually delays the project. There are 4 zones from left to right: the impossible zone (the project cannot be accomplished in less time than this, no matter how many people are applied to the problem), the "haste makes waste" zone (adding people does accelerate delivery, but not in proportion to the added effort; each additional person added to the project lowers our productivity), the linear range (efficient staffing, within which it is possible to trade manpower for time), and the under-staffed/over-staffed zone (productivity is dropping).
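One standard way to formalize this trade-off (the usual Putnam-Norden-Rayleigh formulation, stated here with conventional symbols rather than taken from the notes) is Putnam's software equation:

```latex
% Size = product size, C = process-productivity constant,
% E = total effort, t = schedule time.
\mathrm{Size} = C \cdot E^{1/3} \cdot t^{4/3}
\quad\Longrightarrow\quad
E = \frac{(\mathrm{Size}/C)^{3}}{t^{4}}
```

Required effort grows as the fourth power of schedule compression, which is why the "impossible" and "haste makes waste" zones exist at the left of the curve.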

ITS: train-model integration

The guiding view is that technical innovation should do no harm: schedule releases, and release on schedule. The Solaris train model says that a project cannot integrate into the product until it is ready to ship. If a project is integrated and found not to be of acceptable quality, it is immediately thrown off of the release train. The reward for this strategy is that it should be possible to create a new, high-quality Solaris release at almost any time.

BD: psychological issues

We assume we know what we think we know, and see our work as it was intended rather than as it actually is. We also assume we are better than we are: we suspect problems come from elsewhere, believe in our abilities and methodology, and do not consider ourselves error-prone. This blinds us to many hypotheses, impairing coding and debugging.

CCD: refactoring vs up front design

We need adequate design before coding, but how good can that design be? Only after we have written and tested our code do we understand it better, observe its performance, and see further opportunities for simplification. If you plan on periodic refactoring, it can be more valuable than additional up-front design: it allows us to assess generality, extensibility, and optimization with the benefit of experience.

TERA: risk mitigation

what can we do to reduce its likelihood, and what can we do to reduce its expected impact?

TERA: risk monitoring

what danger signs should we watch for, and how will we respond when a problem occurs?

PST: causes of slippage

• poor or unstable requirements • unrealistic schedules (poor estimates) • "Scope Creep" (new input, lose focus) • unanticipated construction problems • unanticipated quality problems • unanticipated integration problems • external dependency issues • unplanned distractions

PST: SCRUM points

• relatively easy to estimate - developer-convenient unit: "best-case days" - a measure of work, not a delivery date - less misleading and arbitrary than "dollars" - small task estimation is easy and accurate • excellent progress tracking - small tasks enable fine-grained tracking - a more linear measure of progress • well correlated to product progress - only accepted features earn points

TTC: Test Driven Development

Formal methodology (write test cases, run them and confirm failure, write code to implement functionality, re-run tests and confirm success, and check in the new results and code). As a general approach, thoroughly test each feature as you write it, do all testing automatically, and save and accumulate test cases.
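The write-test-first cycle above can be sketched as follows; `slugify` and its behavior are a hypothetical example, not from the notes.

```python
# TDD sketch for a hypothetical slugify() helper.

# Step 1: write the test cases first. Running them before slugify exists
# (or against a stub that raises NotImplementedError) confirms the
# expected failure.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write just enough code to implement the functionality.
def slugify(title):
    return "-".join(title.strip().lower().split())

# Step 3: re-run the tests, confirm success, then check in tests + code.
test_slugify()
```

The saved test accumulates with the rest of the suite so every later change is checked automatically.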

TTC: characteristics of a good test

Fundamental characteristics of a good test are that it is dispositive (it determines correctness), valid (its answers are correct), and deterministic (it yields consistent results). The usability characteristics are: isolated, independent test cases; self-contained tests that bring what they need; and automated tests that run without assistance.

R: error detection principles

General principles of error detection: detect errors as close as possible to their first appearance; prefer general checks that find many kinds of problems; keep checks simple, with a high probability of being correct themselves; and keep overhead low, since checks execute often.
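These principles can be illustrated with a cheap, general invariant check placed where an error would first appear. The container and its invariant are hypothetical:

```python
import bisect

class SortedList:
    """Illustrative container whose invariant is checked after each change."""

    def __init__(self):
        self.items = []

    def _check_invariant(self):
        # General check: catches any insertion bug that breaks ordering.
        # Simple (one comparison pass) and cheap enough to run often.
        assert all(a <= b for a, b in zip(self.items, self.items[1:])), \
            "ordering invariant violated"

    def insert(self, x):
        bisect.insort(self.items, x)
        self._check_invariant()  # detect errors close to first appearance

s = SortedList()
for v in [3, 1, 2]:
    s.insert(v)
print(s.items)  # → [1, 2, 3]
```

A single ordering check finds many distinct insertion bugs, rather than one assertion per suspected bug.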

M: style tools

IDEs (Integrated development environments provide word-processing, compilation and error detection, integration with source code control, build, test, and debugging tools, compressed or outline views of programs, jumping around, language-specific formatting, brace matching, interactive help, templates, smart indenting, automated transforms or refactoring, simultaneous editing, search strings, macros), multiple-file string searching and replacing, diff tools to compare 2 files, merge tools, source-code beautifiers, interface documentation tools to extract documentation from source-code files, templates to streamline tasks, cross-reference tools, class hierarchy generators.

R: error prioritization

Assess likelihood and impact, and rank all errors on a priority ladder from <common, serious> to <rare, minor>. Likelihood is common, occasional, or rare, based on prior experience, estimated transaction risks, and intrinsic risks. Impact ranges from serious (loss of service or data) to moderate (a few brief and isolated failures) to minor (the user can easily work around the problem).
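The priority ladder can be sketched as a sort over <likelihood, impact> pairs. The numeric scales below are illustrative assumptions; only the ladder's ordering from <common, serious> down to <rare, minor> comes from the notes.

```python
# Hypothetical numeric scales for ranking errors on a priority ladder.
LIKELIHOOD = {"common": 2, "occasional": 1, "rare": 0}
IMPACT = {"serious": 2, "moderate": 1, "minor": 0}

def priority(error):
    like, impact = error
    # Sum the two scales; break ties by likelihood.
    return (LIKELIHOOD[like] + IMPACT[impact], LIKELIHOOD[like])

errors = [("rare", "minor"), ("common", "serious"),
          ("occasional", "moderate"), ("common", "minor")]
ladder = sorted(errors, key=priority, reverse=True)
print(ladder[0])   # → ('common', 'serious') tops the ladder
print(ladder[-1])  # → ('rare', 'minor') sits at the bottom
```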

CCD: packages

A collection of classes aggregated into a group, added and removed together as a group. Packages provide well-defined functionality, come in a wide range of formats, and may be supported by package-management software. Because packages are intended to be distributed, they raise stability requirements.

CCD: specifications

Complete descriptions of the interfaces and behavior that a component must have in order to correctly perform its role in a system. Specifications must be specific and measurable; components that meet them are acceptable. They are the step between requirements and design (component-specific requirements, and the basis for component design). Functional specifications (written from the user's point of view, enumerating capabilities and interfaces) vs. technical specifications (written to guide the implementer, capturing design decisions and suggestions).

BD: minimal failure cases

Failures can be complex or subtle, occurring only after millions of operations, depending on environmental factors, and appearing in different places. A minimal failure case is found by finding a simple case that fails solidly, isolating the contributing factors, and finding a minimal combination that reliably fails, making the problem easier to reproduce and thus to debug.
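The isolation step can be sketched as a greedy reduction: repeatedly drop elements from a failing input and keep any smaller input that still fails. The `fails` predicate here is a hypothetical stand-in for "run the program and observe the bug".

```python
def fails(case):
    # Hypothetical bug: the program fails whenever 7 and 9 both appear.
    return 7 in case and 9 in case

def minimize(case):
    """Greedy one-element-at-a-time reduction to a locally minimal failing case."""
    changed = True
    while changed:
        changed = False
        for i in range(len(case)):
            smaller = case[:i] + case[i + 1:]
            if fails(smaller):       # still reproduces: keep the reduction
                case = smaller
                changed = True
                break
    return case

big_failing_input = [1, 7, 3, 9, 5, 2, 8]
print(minimize(big_failing_input))  # → [7, 9]
```

The result is a minimal combination that reliably fails, which is far easier to reproduce and debug than the original seven-element input.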

R: correctness and robustness

Correctness is the degree to which a component is free of defects; robustness is the ability of the component or system to avoid failure in the face of defects. They are complementary properties: correct systems are free of defects, while robust systems may contain defects yet still avoid failure. In practice, robustness is often the more useful property.

RAD: reasons to define routines

creating useful private pseudo-classes (better abstractions; may derive private sub-classes), detail encapsulation (moving complex sequences out of main code, segregating portable and non-portable code, hiding/wrapping global data structures), centralizing a recurring computation (one copy of an often-repeated code sequence; enables interception of key operations).

R: error diagnosis and containment

Diagnose the source of the error (where it originates), its impact (whether the system is recoverable, degraded, or failed), and its persistence (whether it is transient, chronic, or permanent). Diagnosis at design time enables containment. Containment involves determining which components have actually failed and which have been affected, and how to minimize the impact of the error, which is dictated mostly by the architecture.

R: error testing

Don't trust your error detection: provide end-to-end testing, including network links, front end, system, and application. General health monitoring will find a wide range of failures.

DP: algorithmic patterns

• iterator — goal is to enumerate all elements and obtain references to each without understanding the aggregation structure; uses an abstract iterator interface; issues include making iterators multi-thread safe and handling the effect of changes to the aggregation during iteration
• observer — client-to-server notifications where the server has no knowledge of its clients; implemented with a callback interface: the server implements a register-callback method, clients implement the callback methods, and clients register with the server so that it can distribute events by calling callback()
• bridge/strategy — decouples clients from implementations; since clients do not depend on the implementing class, interchangeable implementations are permitted and client and implementation evolve separately; the client depends only on the interface and is unaware of the implementing class; all strategies MUST take the same client inputs
• visitor — enumerates more complex structures, walking a composite heterogeneous object and performing a wide range of operations without understanding the aggregation structure; a walker with a per-node callback allows one walker to serve many purposes, but the visitor must support all node types, and visits give no structural information since they are per-element
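The observer pattern described above can be sketched in a few lines: the server keeps registered callbacks and distributes events by calling them, never learning who its clients are. Names here are illustrative.

```python
class Server:
    """Event source with no knowledge of its clients."""

    def __init__(self):
        self._callbacks = []

    def register_callback(self, cb):
        # Clients register a callback; the server stores it opaquely.
        self._callbacks.append(cb)

    def publish(self, event):
        # Distribute the event to every registered client.
        for cb in self._callbacks:
            cb(event)

received = []
server = Server()
server.register_callback(received.append)   # client-supplied callback
server.publish("config-changed")
print(received)  # → ['config-changed']
```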

RAD: routine length

A large percentage of routines are short accessor routines; other routines should be allowed to grow organically up to 100-200 lines. Beyond 200 lines, be careful: you are reaching the limit of understandability.

BD: bug tracking system

A list of open tasks for developers: work that needs to be done, and a channel of communication between developers and users. It gives the current status of the product and its development, what known problems there are, and their status. Bug tracking systems can also serve as support databases with known problems and work-arounds, or as project-management databases tracking defect detection rates, fix rates, the number of problems discovered, and regression and not-a-bug rates.

TTC: boundary value analysis

Looking at the specified parameter domains, and selecting values near the edges and clearly outside them. It is a non-arbitrary selection process that is entirely based on the specifications, meaningfully measures compliance with a small number of test cases, and (in practice) turns up a fair number of problems.
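A minimal sketch of the selection process, for a hypothetical validator whose specified domain is the integers [0, 100]: the test values sit just inside, on, and just outside each edge, and come entirely from the specification.

```python
def valid_percent(n):
    # Hypothetical function under test; specified domain is [0, 100].
    return 0 <= n <= 100

# Boundary values chosen from the specification, not the implementation.
cases = {
    -1: False,   # just below the lower bound
     0: True,    # on the lower bound
     1: True,    # just above the lower bound
    99: True,    # just below the upper bound
   100: True,    # on the upper bound
   101: False,   # just above the upper bound
}
for value, expected in cases.items():
    assert valid_percent(value) == expected
print("all boundary cases pass")
```

Six cases fully exercise both edges of the domain, which is where off-by-one defects cluster.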

RAD: macros & in-lines

Macros make a sequence of computing instructions available to the programmer as a single program statement; they are expanded at compile time. Because the expansion textually replaces the call, macros call for extra care: fully parenthesize macro arguments, and surround multi-statement macros with curly braces so that they behave as a single statement. Inline routines let the programmer treat the code as a routine at code-writing time, while the compiler converts each call into inline code at compile time.

CCD: component

A modular, deployable, and replaceable part of a system that encapsulates implementation and exposes a set of interfaces. It is a defined part of a larger system that can be added or removed; it contributes to the overall working of the system, and its functionality is defined by an interface specification.

M: elements of readability/coding style

module organization (order in which we describe and define routines and variables), visual layout (consistent visual metaphors and white-space to delimit functional units), naming conventions for clarity, commenting to further accentuate structure and serve as a guide.

M: coding standards

naming conventions (generation, use of case, prefixes, suffixes), usage conventions (e.g. defines, include file processing), commenting conventions (standard module preamble, standard routine preamble), formatting conventions (indentation and commenting style)

DP: architecture patterns

• pipe/filter — stream data processing and transformation: a pipeline of generators, filters, and transformers built from independent processing elements; its power comes from composing independent elements, with reusability and replaceability of elements, but there is no feedback between stages
• layered — organized hierarchically, each layer providing services to the layer above it and serving as a client to the layer below. In some layered systems inner layers are hidden from all but the adjacent outer layer, except for certain functions carefully selected for export; the components thus implement a virtual machine at some layer in the hierarchy. (In other layered systems the layers may be only partially opaque.) The connectors are defined by the protocols that determine how the layers interact
• repository — two quite distinct kinds of components: a central data structure that represents the current state, and a collection of independent components that operate on the central data store. Interactions between the repository and its external components can vary significantly between systems; a continuous series of distinct operations against progressively evolving state gives small, modular operations that are easy to test and debug, but the style is limited in scalability and is vulnerable to corruption of the shared store
• client/server — a unique, large, or expensive resource to be shared by many changing clients; the resource is owned by a single server, and clients get access through request messages. Flexible, general, and heterogeneous, but limited by discovery, new protocols, and extra components
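The pipe/filter style can be sketched with generator stages: each stage is an independent processing element, composition provides the power, and there is no feedback between stages. The stages here are illustrative examples.

```python
def numbers(limit):
    # Generator stage: produces the raw stream.
    yield from range(limit)

def evens(stream):
    # Filter stage: passes through only elements that satisfy a predicate.
    return (n for n in stream if n % 2 == 0)

def squared(stream):
    # Transformer stage: maps each element independently.
    return (n * n for n in stream)

# Each stage is reusable and replaceable; composition is just nesting,
# and data flows strictly forward with no feedback between stages.
pipeline = squared(evens(numbers(6)))
print(list(pipeline))  # → [0, 4, 16]
```

Swapping `evens` for a different filter, or reordering compatible stages, requires no change to the other elements, which is exactly the reusability/replaceability benefit the notes describe.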

