QA Assessment L3


Single-page app vs. multiple-page app

Pros of the multiple-page application:
- It's the perfect approach for users who need a visual map of where to go in the application. A solid menu navigation with few levels is an essential part of a traditional multi-page application.
- Very good and easy for proper SEO management. It gives better chances to rank for different keywords, since the application can be optimized for one keyword per page.

Cons of the multiple-page application:
- There is no option to use the same backend with mobile applications. (UPDATE 27.09.2017: back when I was writing this article, I didn't have much experience with backend or mobile apps. It's obvious to me now that you can use the same backend for both, and I'd like to thank all the readers who pointed that out.)
- Frontend and backend development are tightly coupled.
- The development becomes quite complex: the developer needs to use frameworks for both the client and the server side, which results in longer development time.

Virtual DOM advantages

1. Optimized memory usage. The Virtual DOM makes optimized use of memory compared to other systems because it doesn't hold observables in memory. Each change in the data model can trigger a complete refresh of the virtual user interface. This is very different from the systems used by other libraries that, based on the state of the document, update it only where necessary.
2. CPU-intensive. The Virtual DOM adds a layer of scripting on top of the optimizations the browser carries out to make DOM manipulations transparent to the developer. Compared to other methods of updating the DOM, this additional layer of abstraction makes React more CPU-intensive.
3. Simplicity. From a programmer's perspective, React and its Virtual DOM are simpler than most other approaches to making JavaScript reactive. Pure JavaScript code updates React components, while React updates the DOM. The data binding is not intertwined with the application.
4. Greater performance. With setState(), React.js creates the whole virtual DOM from scratch. Creating a whole tree is very fast, so it enhances performance considerably. One may argue that re-rendering the entire Virtual DOM every time something might have changed is wasteful; the point to consider is that React keeps two Virtual DOM trees in memory and applies only the difference to the real DOM.
5. Efficiency. React's Virtual DOM provides a more efficient way of updating the view in a web application. Each time the underlying data changes in a React app, a new Virtual DOM representation of the user interface is created. Rendering the Virtual DOM is always faster than rendering the UI in the actual browser DOM.
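As a rough illustration of the idea only (not React's actual reconciliation algorithm): the helper names h, render and patch below are made up, prop updates and keyed children are ignored, and it assumes an element with id="app" exists.

```js
// A virtual node is a plain object: { type, props, children }.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// Render a virtual node into a real DOM node.
function render(vnode) {
  if (typeof vnode === 'string') return document.createTextNode(vnode);
  const el = document.createElement(vnode.type);
  for (const [key, value] of Object.entries(vnode.props)) el.setAttribute(key, value);
  vnode.children.forEach(child => el.appendChild(render(child)));
  return el;
}

// Naive diff/patch: touch only subtrees that changed.
// (Prop updates, keyed children and removals are omitted for brevity.)
function patch(parent, el, oldVNode, newVNode) {
  if (el == null || oldVNode == null) {            // node added
    parent.appendChild(render(newVNode));
    return;
  }
  const changed =
    typeof oldVNode !== typeof newVNode ||
    (typeof newVNode === 'string' && oldVNode !== newVNode) ||
    oldVNode.type !== newVNode.type;
  if (changed) {                                   // node replaced wholesale
    parent.replaceChild(render(newVNode), el);
    return;
  }
  if (typeof newVNode !== 'string') {              // same type: recurse into children
    newVNode.children.forEach((child, i) =>
      patch(el, el.childNodes[i], oldVNode.children[i], child));
  }
}

const v1 = h('ul', null, h('li', null, 'one'));
const v2 = h('ul', null, h('li', null, 'one'), h('li', null, 'two'));
const root = document.getElementById('app');
patch(root, root.firstChild, null, v1); // initial render
patch(root, root.firstChild, v1, v2);   // re-render: only the new <li> is created
```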

Repaint/Reflow

A repaint occurs when changes are made to an element's skin that are visible but do not affect its layout. Examples include changing outline, visibility, background, or color. According to Opera, repaint is expensive because the browser must check the visibility of all other nodes in the DOM tree. A reflow is even more critical to performance because it involves changes that affect the layout of a portion of the page (or the whole page). Examples that cause reflows include adding or removing content and explicitly or implicitly changing width, height, font-family, font-size, and more.
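A small sketch of avoiding unnecessary reflows by batching DOM reads and writes (the .item class is just an example): reading a layout property such as offsetHeight right after a style write forces the browser to recalculate layout synchronously.

```js
const items = document.querySelectorAll('.item');

// Bad: alternating reads and writes forces a reflow on every iteration.
items.forEach(el => {
  const height = el.offsetHeight;          // read (forces layout)
  el.style.height = (height + 10) + 'px';  // write (invalidates layout)
});

// Better: batch all reads first, then all writes; one reflow instead of many.
const heights = Array.from(items, el => el.offsetHeight); // reads
items.forEach((el, i) => {
  el.style.height = (heights[i] + 10) + 'px';             // writes
});
```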

ES2019 features

- Array.flat() returns a new array with any sub-array(s) flattened. Called without arguments, flat() only flattens one level deep; an optional depth argument can be provided, or it can be called consecutively.
- String.trimStart() & String.trimEnd(): trimStart() trims whitespace from the start of a string, trimEnd() from the end.
- Optional catch binding allows developers to use try/catch without declaring the error parameter in the catch block.
- Object.fromEntries() builds an object from a list of key/value pairs: let entries = new Map([["name", "john"], ["age", 22]]); console.log(Object.fromEntries(entries)); // { name: 'john', age: 22 }
- Symbol.description: the read-only description property is a string returning the optional description of a Symbol object.
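A quick demonstration of these features (all standard ES2019):

```js
// Array.prototype.flat: flatten nested arrays (default depth 1)
[1, [2, [3, [4]]]].flat();         // [1, 2, [3, [4]]]
[1, [2, [3, [4]]]].flat(Infinity); // [1, 2, 3, 4]

// String.prototype.trimStart / trimEnd
'   hello   '.trimStart();         // 'hello   '
'   hello   '.trimEnd();           // '   hello'

// Optional catch binding: no unused error parameter required
try {
  JSON.parse('not json');
} catch {
  console.log('could not parse');
}

// Object.fromEntries: the inverse of Object.entries
Object.fromEntries(new Map([['name', 'john'], ['age', 22]])); // { name: 'john', age: 22 }

// Symbol.prototype.description
Symbol('my symbol').description;   // 'my symbol'
```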

Why are design patterns so good?

Benefits of design patterns:
- Inspiration: patterns don't provide solutions, they inspire solutions.
- Patterns explicitly capture expert knowledge and design tradeoffs and make this expertise widely available.
- They ease the transition to object-oriented technology.
- Patterns improve developer communication: pattern names form a vocabulary.
- They help document the architecture of a system and enhance understanding.
- Design patterns enable large-scale reuse of software architectures.

Design patterns have two major benefits. First, they provide a way to solve issues related to software development using a proven solution. The solution facilitates the development of highly cohesive modules with minimal coupling. They isolate the variability that may exist in the system requirements, making the overall system easier to understand and maintain. Second, design patterns make communication between developers more efficient. Software professionals can immediately picture the high-level design in their heads when they refer to the name of the pattern used to solve a particular issue while discussing system design.

What is the DOM? What are its main weak points? How does the Virtual DOM improve work with the DOM?

The Document Object Model (DOM) is a programming interface for HTML and XML documents. It represents the page so that programs can change the document structure, style, and content; the document is represented as nodes and objects. Its main weakness is that DOM operations are time-consuming: updates can trigger expensive reflows and repaints. The Virtual DOM speeds this up by diffing an in-memory representation and applying only the minimal set of changes to the real DOM.
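A tiny example of using the DOM API to change structure, style and content (assumes an element with id="greeting" exists on the page):

```js
// The DOM exposes the document as a tree of nodes that scripts can query and mutate.
const el = document.querySelector('#greeting');
el.textContent = 'Hello, DOM';            // change content
el.style.color = 'rebeccapurple';         // change style
el.append(document.createElement('hr'));  // change structure
```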

Advantages of Generators

Lazy evaluation: an evaluation model that delays evaluating an expression until its value is needed. That is, if the value is not needed, it will not exist; it is calculated on demand.
Memory efficiency: a direct consequence of lazy evaluation is that generators are memory efficient. The only values generated are those that are needed. With normal functions, all the values must be pre-generated and kept around in case they need to be used later. With generators, computation is deferred.
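A minimal sketch of lazy, memory-efficient generation; take is a hypothetical helper here, not a built-in:

```js
// An infinite sequence: values are produced only when requested.
function* naturals() {
  let n = 1;
  while (true) yield n++;
}

// Hypothetical helper that pulls only the first `count` values.
function take(iterable, count) {
  const result = [];
  for (const value of iterable) {
    if (result.length === count) break;
    result.push(value);
  }
  return result;
}

console.log(take(naturals(), 5)); // [1, 2, 3, 4, 5]; nothing beyond 5 is ever computed
```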

Redux vs. Flux differences

MVC: follows a bidirectional flow; no store; the controller handles the entire logic.
Flux: follows a unidirectional flow; includes multiple stores; the stores handle all logic.
Redux: follows a unidirectional flow; includes a single store; reducers handle all logic.

Pros and cons of React

PROS
- Component reuse
- Virtual DOM
- Prompt rendering
- Testable: React's native tools are offered for testing and debugging code
- SEO-friendly
- JSX
- Up to date
- Suitable for high-load systems

CONS
- Updates too often
- Learning curve
- Not using an isomorphic (server-side rendering) approach can lead to search-engine indexing problems
- Some developers dislike JSX

Pros and cons of 1-way data binding

PROS
- Makes the code very stable: data flows in a single direction, so changes are predictable and easier to debug.
CONS
- Two-way data binding would simplify some parts of the process (for example, form handling needs more boilerplate with one-way binding).
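A minimal vanilla-JavaScript sketch of the one-way idea (state flows into a render function; the view only dispatches explicit updates). The element ids #count and #increment are made up for this example:

```js
// One-way flow: state changes -> render(); the view never writes state directly.
let state = { count: 0 };

function setState(patch) {
  state = { ...state, ...patch }; // single, explicit path for updates
  render();
}

function render() {
  document.querySelector('#count').textContent = String(state.count);
}

// The view only dispatches intents; it does not mutate state or the DOM itself.
document.querySelector('#increment').addEventListener('click', () => {
  setState({ count: state.count + 1 });
});

render();
```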

CSS preprocessors

PROS
- Make it easier to write clean code, and less code has to be written
- Variables, mixins
- Combining several files into one
- Nested syntax
- Colors can be made darker or lighter with built-in color functions
CONS
- Additional time to learn
- Code has to be compiled
- Debugging problems (the browser sees the generated CSS, not the source)

BEM pros and cons

Pros: in addition to fixing CSS inheritance and specificity issues, incorporating BEM into your projects also brings the following benefits.
- Better HTML/CSS decoupling: by avoiding HTML element names in CSS selectors, BEM makes CSS code readily portable to a different HTML structure.
- Better CSS performance: browsers evaluate CSS selectors right to left, and BEM encourages frontend developers to create a flat CSS structure with no nested selectors. The less browsers have to evaluate, the faster they can render.
- No CSS conflicts: BEM avoids CSS conflicts by using unique contextual class names for every block, element, and modifier combination.
- Ease of code maintenance: BEM's modular approach encourages developing independent modules of code, which are therefore easier to maintain and update without affecting other modules in the project.

Cons: as with everything, BEM also comes with a few downsides, but these can easily be mitigated with a few extra steps in your frontend build process.
- File size bloating: BEM can bloat file sizes with its longer CSS class names, but this can easily be overcome by minifying and gzipping production code.
- Ugly HTML code: the overall HTML does look ugly with BEM class names, but visitors of the website or application rarely look at the source, so it is not really an issue.

React debugger

React Developer Tools (the official browser extension) can be used to inspect the component tree, props, and state.

What is Redux? Could you explain base concepts? Compare with Flux

Redux is a state container.

Flux architecture:
- Store/stores: serves as a container for the app state & logic
- Action: enables data passing to the dispatcher
- View: same as the view in MVC architecture
- Dispatcher: coordinates actions & updates to stores

In the Flux architecture, when a user clicks on something, the view creates actions. An action can create new data and send it to the dispatcher. The dispatcher then dispatches the action result to the appropriate store. The store updates the state based on the result and sends an update to the view.

Redux architecture. Redux is a library which implements the idea of Flux, but in quite a different way:
- Reducer: the logic that decides how your data changes lives in pure functions
- Centralized store: holds a state object that denotes the state of the entire app

In the Redux architecture, an application event is denoted as an action, which is dispatched to the reducer, a pure function. The reducer then updates the centralized store with new data based on the kind of action it receives. The store creates a new state and sends an update to the view, which re-renders to reflect the update.
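A minimal hand-rolled sketch of the idea (it mirrors, but does not reproduce, Redux's actual createStore; the counter reducer is made up):

```js
// A pure reducer: (state, action) -> new state
function counter(state = { value: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT': return { value: state.value + 1 };
    case 'DECREMENT': return { value: state.value - 1 };
    default:          return state;
  }
}

// Hand-rolled single store illustrating what a Redux-style store does.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the reducer decides how data changes
      listeners.forEach(l => l());    // notify the view layer
    },
    subscribe(listener) { listeners.push(listener); },
  };
}

const store = createStore(counter);
store.subscribe(() => console.log(store.getState()));
store.dispatch({ type: 'INCREMENT' }); // { value: 1 }
```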

SOLID

S — Single responsibility principle: a class should have only one responsibility, and therefore only one reason to change.
O — Open/closed principle: software entities (classes, modules, functions, etc.) should be open for extension but closed for modification.
L — Liskov substitution principle: objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program.
I — Interface segregation principle: no client should be forced to depend on methods it does not use. Put more simply: do not add functionality to an existing interface by adding new methods; instead, create a new interface and let your class implement multiple interfaces if needed.
D — Dependency inversion principle: high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.
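A small sketch of the open/closed principle in JavaScript (the shipping-strategy names and rates are made up): new behavior is added by registering a new strategy rather than editing the existing function.

```js
// Open/closed: add new shipping rules by registering new strategies,
// without modifying the existing calculator.
const shippingStrategies = {
  standard: order => order.weight * 1.0,
  express:  order => order.weight * 2.5 + 10,
};

function shippingCost(order, strategy = 'standard') {
  const calc = shippingStrategies[strategy];
  if (!calc) throw new Error(`Unknown shipping strategy: ${strategy}`);
  return calc(order);
}

// Extension point: a new rule is added, existing code stays untouched.
shippingStrategies.drone = order => 25;

console.log(shippingCost({ weight: 2 }, 'express')); // 15
console.log(shippingCost({ weight: 2 }, 'drone'));   // 25
```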

What CSS preprocessors do you know or use? Would you please compare them?

Sass
Pros:
- Lowest barrier to entry: you can harness some of the most powerful features by simply learning a couple of new symbols, and new collaborators should have no trouble picking it up.
- LibSass (which decouples Sass from Ruby) is fast, portable and easy to build.
- By far the most engaged community, with plenty of support and resources.
Cons:
- As with any framework, there's a danger you'll become reliant on this approach and not fully grasp the underlying language.

LESS
Pros:
- Written in JavaScript, which makes setup easy.
- GUI apps can watch and compile code for you (Crunch, SimpLESS, WinLess, Koala, CodeKit, LiveReload or Prepros).
- Very detailed documentation and a very active community; easy to find help or previous examples.
- IDEs such as VS Code, Visual Studio and WebStorm support Less either natively or through plugins.
Cons:
- Uses @ to declare variables, but in CSS @ already has meaning (it's used to declare @media queries and @keyframes), which can cause confusion.
- Time might be better spent learning Sass, due to wider use.
- Relies entirely on mixins rather than allowing you to use functions that can return a value, which can result in slightly restricted use cases.

Stylus
Pros:
- Hugely powerful built-in functions; can do much more computing and 'heavy lifting' inside your styles.
- Written in Node.js, which is fast and fits neatly with a 2018 JavaScript stack.
- 'Pythonic' syntax looks a lot cleaner and requires fewer characters.
Cons:
- Too forgiving, which can lead to confusion.
- Doesn't seem to be in very active development.

How do you test a React app?

- Unit tests with Jest (and its built-in coverage tool)
- Snapshot tests
- Functional/component testing with Enzyme and Jest
- End-to-end testing: a technique used to test whether the flow of an application, right from start to finish, behaves as expected.
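A minimal snapshot-test sketch, assuming Jest (with babel-jest for JSX) and react-test-renderer are installed and a hypothetical ./Button component exists:

```js
// Button.test.js
import React from 'react';
import renderer from 'react-test-renderer';
import Button from './Button'; // hypothetical component under test

test('Button renders consistently', () => {
  const tree = renderer.create(<Button label="Save" />).toJSON();
  expect(tree).toMatchSnapshot(); // fails if the rendered output changes unexpectedly
});
```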

Higher Order Components

a higher-order component is a function that takes a component and returns a new component. const EnhancedComponent = higherOrderComponent(WrappedComponent); HOC doesn't modify the input component, nor does it use inheritance to copy its behavior. Rather, a HOC composes the original component by wrapping it in a container component. A HOC is a pure function with zero side-effects.
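A typical sketch of the pattern (withLogger is a made-up example name; assumes a JSX build setup):

```js
import React from 'react';

// Wraps any component and logs its props on every render.
function withLogger(WrappedComponent) {
  return function WithLogger(props) {
    console.log(`Rendering ${WrappedComponent.name} with`, props);
    // Compose: render the original component inside, passing props through untouched.
    return <WrappedComponent {...props} />;
  };
}

// Usage:
// const EnhancedButton = withLogger(Button);
// <EnhancedButton label="Save" />
```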

Generators usage (marked as a weak answer)

- for loops which need to be paused and resumed at a later date
- infinitely looping over an array and having it reset to the beginning once it's done
- creating iterables for use in for...of loops from non-iterable objects, using [Symbol.iterator] (see the example below)
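For the last point, a small example that makes a plain object iterable by assigning a generator to its [Symbol.iterator] key (the playlist object is made up):

```js
// Make a plain object iterable by giving it a generator-valued [Symbol.iterator].
const playlist = {
  tracks: ['intro', 'verse', 'chorus'],
  *[Symbol.iterator]() {
    for (const track of this.tracks) yield track;
  },
};

for (const track of playlist) {
  console.log(track); // 'intro', 'verse', 'chorus'
}

console.log([...playlist]); // spread also works on any iterable
```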

Differences between test frameworks

https://blog.bitsrc.io/7-react-testing-libraries-you-should-know-b20ca97422a4

The Benefits of Using Static Typing

https://codeburst.io/strict-types-typescript-flow-javascript-to-be-or-not-to-be-959d2d20c007

The benefits of using static typing

Early detection of bugs/errors. Through static type checking, it's easy to validate that the stated invariants hold without even running the program. Instead of detecting errors at runtime, developers can catch them before that. Since the type checker informs you about bugs early, fixing them is easy and comparatively inexpensive compared to discovering the errors once the code has been shipped to your clients. Some operations are valid in JavaScript but produce compile-time errors in TypeScript.

It communicates the purpose of a function. Types act as active, living documentation for the author as well as other users of the program. With static typing, it's easier to comprehend the purpose of a function, the type of data it accepts as input, and what to expect in return. Code comments could serve a similar purpose, but comments are verbose, clutter the overall structure of the code, and it depends entirely on the developer whether to write good comments, complicated ones, or none at all; JSDoc annotations are one middle ground (a JSDoc sketch follows at the end of this section).

It scales down complex error handling. Without static types, you need a lot of defensive code for a small bit of functionality (for example, checking that s is a string while y is a regular expression). With static types, it's easier to avoid complex sets of checks to handle errors or bugs.

It distinguishes between data and behavior. Since static types clearly distinguish between data and behavior, developers can be straightforward about their expectations and explain their purpose more precisely. This mitigates pressure and ambiguity and gives mental clarity and transparency during coding.

It wipes out runtime type errors. Type errors at runtime can be disastrous. Static types help remove such errors: code may run in JavaScript but be flagged with a compilation error because, for example, a referenced signature doesn't exist on the declared type.

Domain modeling tool. Combining both data and its behavior, domain modeling is one of the best features of static types. It helps developers check union cases and instantly see how the app environment is put together (App.subapp.somefunction), while reducing administrative complexity.

The disadvantages of using static types

Similar to any other approach, static type checking isn't free from flaws. To make a sensible decision, it's important to recognize and accept them.

It may confuse beginning developers. One primary reason JavaScript is so popular among web developers is that they don't need to master a complete type system before getting started. This is especially important for beginning programmers and coders; adding types to the process can confuse them and makes the learning curve steeper than expected.

It demands a lot of time and practice. Mastering types isn't as easy as it sounds. It takes a great deal of practice and time to understand the best ways to specify types within a program. Moreover, only an experienced programmer can determine whether or not adding types would serve the purpose.

It gives developers a deceitful sense of security. If you think strict typing can do wonders for bug density, it's time to rethink your approach. In the article 'The broken promise of static typing', Daniel Lebrero writes: "With that in mind, I tried to find some empirical evidence that static types do actually help avoid bugs. Unfortunately the best source that I found suggests that I am out of luck, so I had to settle for a more naïve approach: searching Github."

Using static types in JavaScript — Yay or Nay?

Static types have added a whole new dimension to the world of programming. Although it takes plenty of time and effort to master the technicalities that come with types, the security and precision make these flaws less significant. However, it all boils down to your personal preferences and the type of project you're working on.

Opt for static type checking if:
- Your project is big and complex. If you have a team working on a project with a huge codebase, TypeScript can help you sidestep plenty of errors; a developer could introduce breaking changes at any moment. But if your project isn't complex, it makes no sense to complicate it with unnecessary type annotations (with TypeScript or other static typing, expect at least a 30% increase in total code size).
- A huge team is responsible for the task. TypeScript is a great option if a large team handles the task. It also helps if the members are already familiar with strongly typed languages such as Java or C#.
- You may need to refactor the program in the long run. Use TypeScript if you think there will be a need to refactor the program in the future; don't carry the extra burden for a short-term assignment.
- Your team is familiar with statically typed languages. Strictly typed languages like TypeScript are a perfect choice if you or your teammates already know strongly typed languages like Java or C#. Since TypeScript was designed by the same person who created C#, the two languages share similar syntax.
- You're looking for a Babel substitute. Microsoft has historically taken conventional tools and added proprietary aspects to them (for instance, Java leading to J++); TypeScript follows a similar approach, and it isn't the company's first fork of JS: back in 1996, Microsoft forked JavaScript to develop JScript. Although it's a rare case, it's methodologically conceivable to transpile ES6 code to ES5 using the TS transpiler, because ES6 is basically a subset of TS and the transpiler produces ES5 code. The TypeScript transpiler produces fairly understandable code, which is one of the biggest reasons the Angular 2 team selected it over Google's Dart language. In addition, TS includes some features that are not present in ES6.
- A library suggests TypeScript. Those using Angular 2 or any other framework that recommends TypeScript should give it a try. Although TS can use all available JS libraries, if you want proper type errors you'll need to include the type definitions externally.

Avoid using types if:
- The program isn't critical or long-lived. If the project you're working on isn't long-term and progressive in nature, needless type annotations will only add to the length of your code, making it look messy and complicated.
- You're working alone on the project. TypeScript and Flow best cater to large teams and extensive projects. Before deciding on a tool, work out the scope and nature of your work and decide accordingly.
- Your requirements are limited or the task is simple. Compared to static typing, setting up anything in plain JavaScript is easy. Say 'no' to TypeScript or other statically typed languages if the task isn't convoluted or long-term.
- You want to sidestep possible performance penalties. Similar to Babel, the TS transpiler includes features that require generating additional code, and regardless of how effective the transpiler is, it can't outweigh the judgment of an experienced programmer. The performance penalty is usually insignificant compared to the benefits of the type mechanism, but there are cases where fractions of a second count, and in those instances transpilation of any form isn't advised. Since all type checking takes place at transpile time, TypeScript doesn't add any code for runtime type checking.
- You want to sidestep annoying edge cases. The presence of annoying edge cases is a serious flaw of strictly typed workflows. With source maps it's simpler to debug TypeScript, but the status quo isn't perfect, and debugging the 'this' keyword is another source of trouble.
- You want to avoid extra costs. If you select TypeScript, you'll have to do some additional bookkeeping, which is pointless if you're working on a pure JavaScript project.
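Related to the JSDoc point above, a small sketch of documenting types in plain JavaScript with JSDoc annotations, which tools such as the TypeScript checker can verify via // @ts-check (the function here is made up for illustration):

```js
// @ts-check  -- lets the TypeScript checker validate plain JavaScript via JSDoc types.

/**
 * Applies a percentage discount to a price.
 * @param {number} price   - original price
 * @param {number} percent - discount in the range 0..100
 * @returns {number} the discounted price
 */
function applyDiscount(price, percent) {
  return price - price * (percent / 100);
}

applyDiscount(100, 15);      // 85
// applyDiscount('100', 15); // flagged by the type checker before the code ever runs
```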

Configure build process yourself or choose a ready-to-use solution

https://dev.to/netlify/choosing-a-javascript-build-tool-to-config-or-not-config-2ia8

What context does "this" have in different cases? Arrow functions vs. regular functions. Changing the context.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this

Global context. In the global execution context (outside of any function), this refers to the global object, whether in strict mode or not.

Function context. Inside a function, the value of this depends on how the function is called.

Simple call. Since the following code is not in strict mode, and because the value of this is not set by the call, this defaults to the global object, which is window in a browser.

function f1() { return this; }
// In a browser: f1() === window; // true
// In Node: f1() === global; // true

In strict mode, however, if the value of this is not set when entering an execution context, it remains undefined:

function f2() { 'use strict'; return this; }
f2() === undefined; // true

In the second example, this should be undefined, because f2 was called directly and not as a method or property of an object (e.g. window.f2()). This feature wasn't implemented in some browsers when they first started to support strict mode; as a result, they incorrectly returned the window object.

To set the value of this to a particular value when calling a function, use call() or apply(), as in the following examples.

Example 1:
// An object can be passed as the first argument to call or apply and this will be bound to it.
var obj = {a: 'Custom'};
// This property is set on the global object
var a = 'Global';
function whatsThis() {
  return this.a; // the value of this depends on how the function is called
}
whatsThis();          // 'Global'
whatsThis.call(obj);  // 'Custom'
whatsThis.apply(obj); // 'Custom'

Example 2:
function add(c, d) { return this.a + this.b + c + d; }
var o = {a: 1, b: 3};
// The first parameter is the object to use as this; subsequent parameters are passed as arguments in the function call.
add.call(o, 5, 7); // 16
// The first parameter is the object to use as this; the second is an array whose members are used as the arguments in the function call.
add.apply(o, [10, 20]); // 34

Note that in non-strict mode, with call and apply, if the value passed as this is not an object, an attempt will be made to convert it to an object using the internal ToObject operation. So if the value passed is a primitive like 7 or 'foo', it will be converted to an object using the related constructor: the primitive number 7 is converted as if by new Number(7) and the string 'foo' as if by new String('foo'), e.g.

function bar() { console.log(Object.prototype.toString.call(this)); }
bar.call(7);     // [object Number]
bar.call('foo'); // [object String]

The bind method. ECMAScript 5 introduced Function.prototype.bind(). Calling f.bind(someObject) creates a new function with the same body and scope as f, but where this occurs in the original function, in the new function it is permanently bound to the first argument of bind, regardless of how the function is being used.

function f() { return this.a; }
var g = f.bind({a: 'azerty'});
console.log(g()); // azerty
var h = g.bind({a: 'yoo'}); // bind only works once!
console.log(h()); // azerty
var o = {a: 37, f: f, g: g, h: h};
console.log(o.a, o.f(), o.g(), o.h()); // 37, 37, azerty, azerty

Arrow functions. In arrow functions, this retains the value of the enclosing lexical context's this. In global code, it will be set to the global object:

var globalObject = this;
var foo = (() => this);
console.log(foo() === globalObject); // true

Note: if a this argument is passed to call, bind, or apply on invocation of an arrow function, it will be ignored. You can still prepend arguments to the call, but the first argument (thisArg) should be set to null.

// Call as a method of an object
var obj = {func: foo};
console.log(obj.func() === globalObject); // true
// Attempt to set this using call
console.log(foo.call(obj) === globalObject); // true
// Attempt to set this using bind
foo = foo.bind(obj);
console.log(foo() === globalObject); // true

No matter what, foo's this is set to what it was when it was created (in the example above, the global object). The same applies to arrow functions created inside other functions: their this remains that of the enclosing lexical context.

// Create obj with a method bar that returns a function that returns its this.
// The returned function is created as an arrow function, so its this is permanently
// bound to the this of its enclosing function. The value of bar can be set in the call,
// which in turn sets the value of the returned function.
var obj = {
  bar: function() {
    var x = (() => this);
    return x;
  }
};
// Call bar as a method of obj, setting its this to obj.
// Assign a reference to the returned function to fn.
var fn = obj.bar();
// Call fn without setting this; it would normally default to the global object
// (or undefined in strict mode).
console.log(fn() === obj); // true
// But caution if you reference the method of obj without calling it.
var fn2 = obj.bar;
// Calling the arrow function's this from inside the bar method will now return window,
// because it follows the this from fn2.
console.log(fn2()() == window); // true

In the above, the function assigned to obj.bar (call it anonymous function A) returns another function (call it anonymous function B) that is created as an arrow function. As a result, function B's this is permanently set to the this of obj.bar (function A) when called. When the returned function (function B) is called, its this will always be what it was set to initially. In the code example above, function B's this is set to function A's this, which is obj, so it remains set to obj even when called in a manner that would normally set its this to undefined or the global object (or any other method, as in the previous example in the global execution context).

As an object method. When a function is called as a method of an object, its this is set to the object the method is called on. In the following example, when o.f() is invoked, this inside the function is bound to the o object.

var o = {
  prop: 37,
  f: function() { return this.prop; }
};
console.log(o.f()); // 37

Note that this behavior is not at all affected by how or where the function was defined. In the previous example, we defined the function inline as the f member during the definition of o. However, we could have just as easily defined the function first and later attached it to o.f. Doing so results in the same behavior:

var o = {prop: 37};
function independent() { return this.prop; }
o.f = independent;
console.log(o.f()); // 37

This demonstrates that it matters only that the function was invoked from the f member of o. Similarly, the this binding is only affected by the most immediate member reference. In the following example, we invoke the function as a method g of the object o.b. This time during execution, this inside the function refers to o.b. The fact that the object is itself a member of o has no consequence; the most immediate reference is all that matters.

o.b = {g: independent, prop: 42};
console.log(o.b.g()); // 42

this on the object's prototype chain. The same notion holds true for methods defined somewhere on the object's prototype chain. If the method is on an object's prototype chain, this refers to the object the method was called on, as if the method were on the object.

var o = {f: function() { return this.a + this.b; }};
var p = Object.create(o);
p.a = 1;
p.b = 4;
console.log(p.f()); // 5

In this example, the object assigned to the variable p doesn't have its own f property; it inherits it from its prototype. But it doesn't matter that the lookup for f eventually finds a member with that name on o; the lookup began as a reference to p.f, so this inside the function takes the value of the object referred to as p. That is, since f is called as a method of p, its this refers to p. This is an interesting feature of JavaScript's prototype inheritance.

this with a getter or setter. Again, the same notion holds true when a function is invoked from a getter or a setter. A function used as a getter or setter has its this bound to the object from which the property is being set or gotten.

function sum() { return this.a + this.b + this.c; }
var o = {
  a: 1, b: 2, c: 3,
  get average() { return (this.a + this.b + this.c) / 3; }
};
Object.defineProperty(o, 'sum', { get: sum, enumerable: true, configurable: true });
console.log(o.average, o.sum); // 2, 6

As a constructor. When a function is used as a constructor (with the new keyword), its this is bound to the new object being constructed. While the default for a constructor is to return the object referenced by this, it can instead return some other object (if the return value isn't an object, then the this object is returned).

/*
 * Constructors work like this:
 *
 * function MyConstructor() {
 *   // Actual function body code goes here.
 *   // Create properties on |this| as desired by assigning to them, e.g.:
 *   this.fum = "nom";
 *   // et cetera...
 *
 *   // If the function has a return statement that returns an object,
 *   // that object will be the result of the |new| expression.
 *   // Otherwise, the result of the expression is the object currently bound to |this|
 *   // (i.e., the common case most usually seen).
 * }
 */
function C() { this.a = 37; }
var o = new C();
console.log(o.a); // 37

function C2() { this.a = 37; return {a: 38}; }
o = new C2();
console.log(o.a); // 38

In the last example (C2), because an object was returned during construction, the new object that this was bound to simply gets discarded. (This essentially makes the statement "this.a = 37;" dead code. It's not exactly dead because it gets executed, but it can be eliminated with no outside effects.)

As a DOM event handler. When a function is used as an event handler, its this is set to the element the event fired from (some browsers do not follow this convention for listeners added dynamically with methods other than addEventListener()).

// When called as a listener, turns the related element blue
function bluify(e) {
  // Always true
  console.log(this === e.currentTarget);
  // true when currentTarget and target are the same object
  console.log(this === e.target);
  this.style.backgroundColor = '#A5D9F3';
}
// Get a list of every element in the document
var elements = document.getElementsByTagName('*');
// Add bluify as a click listener so when the element is clicked on, it turns blue
for (var i = 0; i < elements.length; i++) {
  elements[i].addEventListener('click', bluify, false);
}

In an inline event handler. When the code is called from an inline on-event handler, its this is set to the DOM element on which the listener is placed:

<button onclick="alert(this.tagName.toLowerCase());">Show this</button>

The above alert shows button. Note, however, that only the outer code has its this set this way:

<button onclick="alert((function() { return this; })());">Show inner this</button>

In this case, the inner function's this isn't set, so it returns the global/window object (i.e. the default object in non-strict mode where this isn't set by the call).

What is Dependency Injection in general? What benefits do we gain? Are there any disadvantages?

https://dzone.com/articles/all-you-need-to-know-about-dependency-injection
Dependency Injection (DI) is a design pattern used to implement IoC (Inversion of Control). It allows the creation of dependent objects outside of a class and provides those objects to the class in different ways. Using DI, we move the creation and binding of the dependent objects outside of the class that depends on them.
Pros:
- Reduction in dependencies
- Highly reusable code
- Improved testability of code
- Higher readability of code
- The single responsibility principle can be applied
Cons:
- It increases complexity, usually because the number of classes grows due to the single responsibility principle, which is not always beneficial
- Code is coupled to the dependency injection framework
- Runtime type resolving (slightly) affects performance
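A minimal constructor-injection sketch in JavaScript (the class and service names are made up):

```js
// Without DI, OrderService would create its own dependencies internally,
// making it hard to test or swap implementations.
class OrderService {
  constructor(paymentGateway, logger) {
    this.paymentGateway = paymentGateway; // dependencies are provided, not created
    this.logger = logger;
  }
  checkout(order) {
    this.logger.info(`Charging ${order.total}`);
    return this.paymentGateway.charge(order.total);
  }
}

// Production wiring happens outside the class...
const stripeLikeGateway = { charge: amount => ({ ok: true, amount }) }; // stand-in for a real gateway
const service = new OrderService(stripeLikeGateway, console);
console.log(service.checkout({ total: 42 })); // { ok: true, amount: 42 }

// ...and tests can inject fakes without touching OrderService itself.
const testService = new OrderService(
  { charge: amount => ({ ok: true, amount }) },
  { info: () => {} }
);
console.log(testService.checkout({ total: 1 })); // { ok: true, amount: 1 }
```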

What are generators?

https://learn.javascript.ru/generator
Generators are a special kind of function in modern JavaScript. They differ from regular functions in that they can pause their execution, return an intermediate result, and then resume it later, at an arbitrary moment.

function* generateSequence() {
  yield 1;
  yield 2;
  return 3;
}

When a generator is created, its code is paused at the very beginning of its execution. The main method of a generator is next(). When called, it resumes execution of the code until the nearest yield keyword; when yield is reached, execution pauses and the value is returned to the outer code. One of the main use cases for generators is writing "flat" asynchronous code.
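Continuing the snippet above, a short usage sketch showing what next() returns at each step:

```js
const generator = generateSequence(); // execution is paused at the start

console.log(generator.next()); // { value: 1, done: false }
console.log(generator.next()); // { value: 2, done: false }
console.log(generator.next()); // { value: 3, done: true } (the return value)
console.log(generator.next()); // { value: undefined, done: true }
```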

How often do you refactor code? What are the reasons which force you to refactor code?

https://rubygarage.org/blog/when-to-refactor-code

#1 You make significant upgrades or are faced with legacy code. The development of a project shouldn't stop when it's launched. With more users, code gets slower and new bugs appear, requiring fixes. Besides, over time you may want to add new functionality, use a new technology, or replace outdated libraries. But what should you do if your project's codebase is bulky? What if you have to deal with legacy code? The answer is that you should refactor it. Follow the advice below to get the best results when refactoring legacy code:
- Don't start refactoring right away. When you inherit legacy code, you or your team may (and most likely will) think the code is ugly. However, if it does the job, it's not that bad. Don't dive into refactoring and trying to fix all the weaknesses until you get acquainted with the code; there's a chance it has dependencies you're unaware of.
- Start small. Don't refactor the whole codebase at once: it will take too long and your team will get stuck in the refactoring process without a chance to do any other work. It's better to plan small changes in every sprint. This allows your team to improve the code and develop it at the same time.
- Follow the Red-Green-Refactor principle when adding new functionality. Agile software development calls for adding new functionality using the Red-Green-Refactor principle: Red means create tests, Green means write code to pass those tests. After the red and green phases, developers can refactor the code, making it laconic and clean.
"Always leave the code behind in a better state than you found it." (Uncle Bob Martin)

#2 There are lots of bugs. Fixing bugs without refactoring can lead to even more bugs, plus hours of tedious work. Fixing one or a few bugs in your codebase may not require refactoring at all. However, a codebase with a lot of bugs (legacy code, for instance) can be what's called spaghetti code: you fix one thing and another crashes. Make sure the code you're going to debug doesn't have hidden dependencies or repetitions and is easy to read. If it does, and/or it's hard to read, refactor it first and then debug.

#3 You need to make code robust to changes. It's definitely time to refactor your codebase when you add new features and bugs appear in parts that weren't changed and functioned perfectly before: it means your code is flaky. To avoid this situation and create a codebase that's resilient to changes, developers should follow the principles of test-driven development and behavior-driven development as well as the SOLID and DRY principles. If the codebase already exists, you can improve it with refactoring.

#4 There's repetitive code. Repeated code is a common problem when several developers are working on different parts of the same project. Code gets repeated when developers simply don't know that someone else has already written code they could reuse. Such duplications lead to cases where a bug is fixed in one place but not in all the other places. Fixing bugs in such code can become a nightmare, especially when a developer doesn't understand which version is correct. Duplications also make code clumsy and slow. The Don't Repeat Yourself (DRY) principle, commonly used in Agile software development methodologies, aims at making all elements independent, so a change to any single element of a system won't require a change to other, logically unrelated elements. According to the DRY principle, code refactoring is the main cure for duplicated code: refactoring helps find repetitions and make code more laconic. Martin Fowler describes the rule of three, which explains when to refactor, in his book Refactoring: the first time developers do something, they should do it straightforwardly; the next time they do something similar, they can duplicate the existing piece of code; the third time, they should refactor.

#5 Code is hard to read. The main aim of refactoring is to improve the readability of code, making it more efficient and maintainable. In many cases it's not even necessary to restructure code to do that; a developer can just rename a few functions or variables using more straightforward names, and that will be enough to make the code more readable.
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." (Martin Fowler)

#6 There's technical debt. Technical debt is often compared with monetary debt: when you don't repay it, the interest compounds. Even if you don't develop your software actively, the debt still grows. Why? Because developers who worked on the project leave it, and those small sacrifices to code quality that they made to meet business deadlines or enter the market faster show up in full strength. Technical debt has consequences on different levels, for example:
- New features take a long time to implement
- Broken deadlines and budgets
- Inaccurate estimates
- Vendor lock-in, when it's almost impossible to change a software development company
There's no way to avoid technical debt entirely; nevertheless, you can take measures to minimize it. The Agile methodology calls for constant refactoring during development as the main weapon against technical debt. However, most managers and developers rarely get a chance to start a project from scratch. With an existing project, they should schedule regular refactoring tasks for every sprint and reduce technical debt until the code is clean and can easily be read by a new team of developers.

Wrapping up. Refactoring is one of the most effective tools to keep code quality high. That's why you should make it part of the software development routine and stop considering refactoring an optional step to be taken from time to time.
"The word 'refactoring' should never appear in a schedule. Refactoring is not a story or a backlog item. Refactoring is not a scheduled task. Refactoring is immediate and continuous. It's like washing your hands in the bathroom. You always do it." (Uncle Bob Martin)

When not to refactor. Refactoring has to be an essential part of the development process. However, there are cases when refactoring is a waste of time and money:
- You don't have enough code coverage. Code refactoring can cause more harm than good if you don't have all the necessary tests in place. When dealing with legacy code that doesn't have proper test coverage, you can't be sure the code you're refactoring works correctly. If you can't check how your changes influence the code, it's better to postpone refactoring; otherwise you can easily mess something up. The Agile methodology, which helps developers create high-quality software fast, also promotes test-driven development (TDD). This approach teaches developers to write automated tests before the code itself and requires refactoring after each iteration. Using a test-driven development approach, you'll have tests to run after every little step of the refactoring process.
- It's not clear where to move. When you have to deal with legacy code that doesn't look very pretty, you understand that it needs to be improved, but it's not always obvious where to start and what to change. Without a clear plan for refactoring, any attempt to change anything can lead to new bugs and complications. In such a case, it's better to leave the code as is (if it does the job) and try to get to know it better. After a while, when all the dependencies show up and you know the code inside and out, you'll understand which parts you need to refactor and how to do so successfully.
- The module needs to be revamped completely. Sometimes it's much easier to rewrite a module from scratch than to try to refactor and save it. Here are the cases when you can seriously consider this possibility: your development team can barely understand the code; debugging is becoming more and more challenging; more time is spent on fixing bugs than on implementing new features; any change sends ripple effects through the module; the code is too messy and difficult to maintain; you know there's a technology or framework that can dramatically reduce the amount of code in the module. If even one of these cases describes your situation, it's probably time to retire the old code. If you've decided to rewrite a module, well-written tests can make your life significantly easier: with proper tests, it's relatively straightforward to create a new, more easily maintainable module. If you don't have tests, start with writing a detailed specification and only then proceed to code.
- You need to launch fast. If you want to enter the market as soon as possible (and there are a number of reasons why you may want to), you can ignore refactoring. For instance, say you want to launch a minimum viable product (MVP) as a proof of concept or to validate a business idea and show it to investors. In such a case, bugs are less relevant: they can be fixed later, or you can rebuild the MVP into a full-fledged system by applying TDD and Agile principles. Another possible reason is a desire to get ahead of competitors and hit the market faster; refactoring will only slow you down, so you can sacrifice it to win the audience.

The bottom line. Code refactoring is an inherent part of the modern software development process. It should be considered a separate process only when you inherit an old codebase that's difficult to read and maintain. In all other cases, refactoring should be part of the routine.

MV* patterns

https://webcache.googleusercontent.com/search?q=cache:Vwh-YjHwFl4J:https://www.slideshare.net/RaduIscu/mv-patterns+&cd=6&hl=en&ct=clnk&gl=kz

Design patterns are reusable solutions to commonly occurring problems in software design.

MVC (Model-View-Controller). MVC is a way to build frameworks for web applications by providing a separation of concerns: different files each have a specific responsibility. The pattern was originally designed by Trygve Reenskaug during his time working on Smalltalk-80 (1979), where it was initially called Model-View-Controller-Editor. A common restaurant analogy: the cooks (Model) make the food from the orders the waiter passes on from the customer; the customers receive plates of food (View); the waiters (Controller) go between the kitchen and the dining room, taking orders from the customer to the kitchen (from View to Model). JavaScript has many frameworks that use the MVC pattern — Backbone, Ember.js and AngularJS, to name a few.
- Model: the logic of a web application, where data is changed and/or saved. A model may have various views observing its data updates. Persistence allows us to edit and update models with the knowledge that the most recent state will be saved in memory, in the user's localStorage data store, or synchronized with a database.
- View: the user-facing part, where the user interacts with the application. In JavaScript that means building the DOM elements. Views get updated when the model changes its state. The responsibility of actually updating the model goes to the controller; the view is only a visual representation of the models. It's recommended to use templating solutions — Handlebars.js and Mustache — so views are dynamically loaded when needed, as opposed to manually creating all your HTML elements in memory using string concatenation.
- Controller: the bridge between the models and the views. It transmits data from the browser to the application and vice versa. This is the part where JavaScript frameworks do not follow the classical MVC pattern, which is why they are often referred to as MV* frameworks. Backbone contains models and views, but it doesn't actually have true controllers: its views and routers act a little like a controller, but neither is actually a controller on its own.

MVP (Model-View-Presenter). MVP also separates concerns across components like the MVC pattern, but there are key differences. Osmani refers to this pattern as a passive architecture. It's best used for web applications that have complex views with a lot of user interaction, so that all logic is decoupled into a presenter.
- Presenter: the component that holds the user-interface business logic. It receives user requests and sends data back so the view can be updated accordingly when models change. It's the presenter's responsibility to set the data.

MVVM (Model-View-ViewModel). This design pattern clearly isolates UI development from business logic in a web application. It was created by Microsoft. UI developers write bindings to the ViewModel within their document markup (HTML), while the Model and ViewModel are maintained by developers working on the application logic. JavaScript frameworks such as KnockoutJS, Kendo MVVM and Knockback.js use this pattern.
- ViewModel: passes data from the View to the Model and acts like a controller. Views and ViewModels talk to each other through two-way data binding and events; a ViewModel is not required to reference a View because of data binding. The ViewModel doesn't just expose Model attributes but also provides access to other methods and features such as validation. Osmani notes that this pattern abstracts the View, decreases the business logic in the code, and facilitates unit testing compared to event-driven code. However, data binding is not recommended for simpler UIs.
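A minimal plain-JavaScript sketch of the MVC split described above (the model notifies observers, the view renders, the controller mediates); all names are made up for illustration:

```js
// Model: owns the data and notifies observers when it changes.
const model = {
  todos: [],
  observers: [],
  add(todo)     { this.todos.push(todo); this.notify(); },
  subscribe(fn) { this.observers.push(fn); },
  notify()      { this.observers.forEach(fn => fn(this.todos)); },
};

// View: a pure rendering of the model's state.
const view = {
  render(todos) { console.log('TODO:', todos.join(', ')); },
};

// Controller: translates user input into model updates.
const controller = {
  onAddClicked(text) { model.add(text); },
};

model.subscribe(todos => view.render(todos));
controller.onAddClicked('write tests'); // TODO: write tests
controller.onAddClicked('refactor');    // TODO: write tests, refactor
```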

CSS methodologies, such as BEM or Atomic: compare them

https://www.webfx.com/blog/web-design/css-methodologies/

CSS can be written in such a free and easy way that programmers develop their own preferences, habits and writing styles. How can we get everyone back on the same CSS page? When we're working on a small project with a small team, issues in coding practices don't really come into play. But when we're working on a really large project, following a methodology means that all developers are on the same page and the website CSS is easier to read, write and maintain, which can lead to savings in project hours and page file size.

Which are the most used CSS methodologies?

BEM - Blocks, Elements and Modifiers. This is a popular naming convention used by developers to better describe the relationship between elements, and it's simple to put into practice.
- Blocks: parents, wrappers or standalone blocks, e.g. navigation, header, signup-form, button.
- Elements: child elements of the block, e.g. navigation link, signup-form label, button arrow.
- Modifiers: ways of modifying either a block or an element from its original state, e.g. active, disabled, error, confirm.
What's great about BEM is that it tells us the relationship of classes to other classes just from the naming convention. For example, .button--error is the error modifier of a regular button (indicated by the double dash), whereas .button__arrow is the arrow that's a child element within the button tag. The drawback of BEM is that you end up using longer names to describe classes, meaning that your HTML and CSS files can get cluttered and overgrown.

Atomic. This involves writing simple reusable classes, e.g. .centered { text-align: center; }, and then applying the reusable class in the HTML rather than writing text-align: center in each class that needs it. It can also be applied to your Sass file structure, so that reusable parts are separated into smaller .scss files that can be copied and pasted across projects.

SMACSS. SMACSS categorises styles into 5 categories: Base, Layout, Module, State and Theme.
- Base: the default styles across your site, e.g. reset styles like body { margin: 0; padding: 0; } or a { color: blue }.
- Layout: styles that divide the page into groups and sections, most likely containing modules.
- Module: reusable sections and strips used in templates, such as social media feeds and calls to action at the end of a page.
- State: styles that describe the state of a module or layout, e.g. active, disabled, error, confirmed.
- Theme: less common, but helps describe overall differences in layouts and modules, e.g. a Christmas theme, a shopping cart header, etc.

Object Oriented CSS (OOCSS). This involves extending classes and not using location-dependent styles. For instance, if you have a background style, set it up as a Sass mixin and then extend it each time rather than adding it multiple times. This means you maintain it at its source (the mixin) rather than having to amend each instance. It also means targeting elements directly: instead of writing .nav li a { ... }, put a class such as .list-item on the element and target it directly.

So which should I use? Each technique has similar concepts that are executed in slightly different ways. BEM seems to be the most popular at the time of writing, but each deals with tackling the same issue: easier-to-maintain code. We recommend choosing the right methodology for the development team as a whole, so that websites are created faster and more efficiently, and are easy to maintain and add to in the future.

KISS, DRY, YAGNI, SOLID

What DRY, DIE, KISS, SOLID and YAGNI mean in programming; let's look at these approaches in order.

DRY stands for Don't Repeat Yourself; it is also known as DIE (Duplication Is Evil). The principle is to avoid repeating the same code; it's better to use universal properties and functions.

KISS stands for Keep It Simple, Stupid: don't overcomplicate. The point of this principle is to build the simplest and clearest architecture possible, apply design patterns, and not reinvent the wheel.

SOLID, put simply, means that when these principles are applied together while writing code, it becomes significantly easier to maintain and evolve the program. The acronym stands for:
- Single responsibility principle: each class should have one, and only one, responsibility.
- Open/closed principle: software entities should be closed for modification but open for extension.
- Liskov substitution principle: functions that use a base type must be able to use subtypes of that base type without knowing it. Subclasses may not alter the behavior of the base classes; subtypes should extend base types.
- Interface segregation principle: many specialized interfaces are better than one universal interface.
- Dependency inversion principle: dependencies within the system are built on abstractions; higher-level modules do not depend on lower-level modules; abstractions should not depend on details; details should depend on abstractions.

YAGNI means You Ain't Gonna Need It. Its essence is to implement only the tasks at hand and to avoid redundant functionality.

