ICS 33 Final


How does a class support determining its truthiness?

1. If the object has a __bool__(self) method, its result is used to determine the object's truthiness.
2. Otherwise, if the object has a __len__(self) method, its result is used instead, with zero being falsy and anything non-zero being truthy.
3. Otherwise, the object is always considered to be truthy.
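A minimal sketch of rule 2 above (the class name Basket is illustrative, not from the course):

```python
class Basket:
    """No __bool__, so truthiness falls back on __len__."""

    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)


# bool() consults __len__: zero is falsy, non-zero is truthy.
print(bool(Basket([])))       # False
print(bool(Basket(['boo'])))  # True
```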

How does a class support being iterable?

1. If the object has an __iter__(self) method, it's called, and its result is the iterator that will be used to manage the iteration.
2. Otherwise, if the object has __len__ and __getitem__ methods, an iterator that runs the equivalent of the loop below is used instead, except that the values are returned to us individually instead of printed in the Python shell. (One way to implement that ourselves would be with a generator function, though Python handles the details internally for us.)

>>> index = 0
>>> while index < len(s):
...     print(s[index])
...     index += 1

3. Otherwise, the object is considered not to be iterable.
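A sketch of the fallback in rule 2: a class with __len__ and __getitem__ but no __iter__ is still iterable (the class name Squares is made up for illustration):

```python
class Squares:
    """Iterable without __iter__, via the __len__/__getitem__ fallback."""

    def __init__(self, n):
        self._n = n

    def __len__(self):
        return self._n

    def __getitem__(self, index):
        if not 0 <= index < self._n:
            raise IndexError(index)
        return index * index


# Python calls __getitem__ with 0, 1, 2, ... until IndexError is raised.
print(list(Squares(4)))  # [0, 1, 4, 9]
```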

How does Python handle attribute lookups?

1. When an attribute is accessed on an object, Python calls the object's __getattribute__ dunder; a class can override it to take over the entire lookup. The default behavior continues below.
2. Python checks the object's dictionary. If there's a key matching the attribute's name, the corresponding value is returned.
3. If the object's dictionary has no such key, Python tries looking in the dictionary belonging to the class instead (and its base classes). If there's a key matching the attribute's name, the corresponding value is returned.
3a. If the class attribute found is a descriptor, that descriptor's __get__ method is called, passing in the object instance and the class of the object. The __get__ method is responsible for returning the value of the attribute.
4. If the attribute is not found in the class dictionary, Python attempts to call the object's __getattr__ to create "phantom attributes."
5. Otherwise, AttributeError is raised.
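A sketch of step 4's "phantom attributes" (the class name Defaults is made up): __getattr__ runs only when the normal lookup in steps 2-3 has already failed.

```python
class Defaults:
    """__getattr__ is only consulted when normal lookup fails."""

    def __init__(self):
        self.real = 1

    def __getattr__(self, name):
        # Called only for attributes not found on the object or its class.
        return f'phantom:{name}'


d = Defaults()
print(d.real)                   # 1 (found in d.__dict__; __getattr__ never runs)
print(d.missing)                # phantom:missing
print('missing' in d.__dict__)  # False, since phantom attributes aren't stored
```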

What's needed for generators?

A generator in Python is a function that returns a sequence of results, rather than returning a single result. Interestingly, generators don't return all of their results at once; they instead return one result at a time. Even more interestingly, generators don't calculate all of their results at once, either, which means they don't run to completion when you call them, but instead run in fits and starts, doing just enough work to calculate their next result each time they're asked for one.

A generator requires:
- yield
- yield from

Within a generator function, the yield from statement yields every element from an iterable, one at a time, as though you had written a for loop to iterate it and used yield to generate each value separately.

Ending a generator:
- Reaching the end of the function raises StopIteration.
- return raises StopIteration, whose value is whatever was returned.
- Explicitly raising StopIteration inside a generator is converted to a RuntimeError in modern Python (PEP 479), so ending with return is the supported approach.

Note: Generators ARE iterators, but iterators are not always generators.

Generator comprehensions are as simple as:

>>> (x * x for x in range(3))
<generator object <genexpr> at 0x000002CEA10F4A50>

Generator comprehensions return generators, just like generator functions do.
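A small sketch of yield, yield from, and how a generator ends (the function names are illustrative):

```python
def count_up_to(n):
    """A generator: runs lazily, producing one value per request."""
    i = 0
    while i < n:
        yield i
        i += 1
    # Falling off the end raises StopIteration for the caller.


def counted_then_done(n):
    # yield from flattens another iterable into this generator's output.
    yield from count_up_to(n)
    yield 'done'


g = count_up_to(2)
print(next(g))  # 0
print(next(g))  # 1
# A further next(g) would raise StopIteration: the body has finished.
print(list(counted_then_done(3)))  # [0, 1, 2, 'done']
```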

How to make a context manager?

After the context expression of a with statement is evaluated, its result becomes a context manager, at which point its __enter__ method is called. When the with statement is exited, its __exit__ method is called.

__enter__(self): Whatever it returns is the value that would be stored in the context variable x if the top line of our with statement ends with as x. (More often than not, __enter__ returns self, but it's not required to.)

__exit__(self, exc_type, exc_value, exc_traceback): If the exit was normal (i.e., because we left the scope of the with statement without an exception being raised), exc_type, exc_value, and exc_traceback will each be None. If the exit was because an exception was raised, exc_type will be the type of the exception, exc_value will be the exception object itself, and exc_traceback will contain its traceback; in that case, returning True from __exit__ will cause Python to suppress the exception so that it does not propagate any further.
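A minimal context manager sketch showing both methods (the class name Suppressor is made up; it suppresses a chosen exception type by returning True from __exit__):

```python
class Suppressor:
    """Context manager that swallows one exception type."""

    def __init__(self, exception_type):
        self._exception_type = exception_type

    def __enter__(self):
        return self  # becomes the value bound by 'as', if one is present

    def __exit__(self, exc_type, exc_value, exc_traceback):
        # All three are None on a normal exit. Returning True on an
        # exceptional exit tells Python to suppress the exception.
        return exc_type is not None and issubclass(exc_type, self._exception_type)


with Suppressor(ValueError):
    raise ValueError('this never propagates')

print('still running')  # the exception was suppressed by __exit__
```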

Slicing

Allows us to take a sequence of objects and obtain a subsequence, containing some of the objects while skipping others. The slice will usually be the same type — so, for example, a slice of a list will be a list, a slice of a string will be a string, and so on. object[start(inclusive):stop(exclusive):step]

dunders needed for Iterables and Iterators

An iterable is an object that can produce a sequence of values, one at a time. An iterator is an object that manages the process of producing a sequence of values.

When we want the objects of a class to be iterable, we need the class to support the iterable protocol. Like the other protocols we've seen, that means we need to write the dunder methods it requires, and that those methods need to work in a way that's compatible with the protocol. The iterable protocol looks deceptively simple, because it only requires one method:

- __iter__(self), which returns an iterator that is capable of producing one element at a time.

Iterators, too, are built around a protocol: the iterator protocol, which consists of two methods:

- __next__(self), which returns the next element from the iterator, or raises StopIteration if there are no more elements.
- __iter__(self), which returns the iterator itself. (This allows iterators to be used wherever iterables can be used, so you can, for example, pass an iterator to the list constructor or use one to drive a for loop. In other words, iterators are iterable, even if iterables are not always iterators.)
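Both protocols sketched together (class names Countdown and CountdownIterator are illustrative): the iterable hands out a fresh iterator, and the iterator tracks the iteration's state.

```python
class CountdownIterator:
    """Implements the two-method iterator protocol."""

    def __init__(self, start):
        self._current = start

    def __iter__(self):
        return self  # iterators are themselves iterable

    def __next__(self):
        if self._current <= 0:
            raise StopIteration
        self._current -= 1
        return self._current + 1


class Countdown:
    """Implements the one-method iterable protocol."""

    def __init__(self, start):
        self._start = start

    def __iter__(self):
        return CountdownIterator(self._start)


print(list(Countdown(3)))  # [3, 2, 1]
```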

How does a class support being hashable?

An object in Python is hashable if it supports a protocol that provides two operations:

- A method __hash__(self) that determines the object's hash, with the expectation that an object will always have the same hash. (This is how a set decides where its objects belong.)
- A method __eq__(self, other) that determines whether the object is equivalent to some other object. (This is how a set decides whether an object is a duplicate of another.)

How does the hash function know whether an object is hashable? As you likely expect, there is a dunder method called __hash__ that calculates an object's hash. Hashable objects are the ones that have a __hash__ method; unhashable objects are the ones that don't.

The job of the __hash__ method is to combine the information in the object together into a single integer, taking all of that information into account, so that objects that are different in some way will be likely to hash differently. A simple but effective way to do that is to create a tuple containing all of the object's attributes, then pass it to the built-in hash function.

PyCharm example:

def __hash__(self):
    return hash((self._start, self._stop, self._step))

Note: Classes that provide a __hash__ method should also provide an __eq__ method, because there are two relationships that need to be maintained between the meanings of equivalence and hashing:

- If two objects are equivalent, they must have the same hash.
- If two objects have different hashes, they must not be equivalent.

Notably missing from that list is the implication that two objects having the same hash are known to be equivalent. This won't necessarily be true, because part of what we do when we hash an object is simplify it to a value that's lower-fidelity; some information will almost surely be lost, which means inequivalent objects will unavoidably have the same hash sometimes. While we'd like to make that as unlikely as possible, we can't avoid it in general.
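A sketch of the pairing (the class name Point is made up): __eq__ and __hash__ consult the same attributes, so equivalent objects are guaranteed to hash equally, and a set treats them as duplicates.

```python
class Point:
    """Hashable: __eq__ and __hash__ are built from the same fields."""

    def __init__(self, x, y):
        self._x, self._y = x, y

    def __eq__(self, other):
        if not isinstance(other, Point):
            return NotImplemented
        return (self._x, self._y) == (other._x, other._y)

    def __hash__(self):
        # Tuple up the attributes and let the built-in hash combine them.
        return hash((self._x, self._y))


print(Point(1, 2) == Point(1, 2))              # True
print(hash(Point(1, 2)) == hash(Point(1, 2)))  # True
print(len({Point(1, 2), Point(1, 2)}))         # 1, since the set sees a duplicate
```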

Creating a table in SQL

CREATE TABLE person(
    person_id INTEGER PRIMARY KEY,
    name TEXT,
    age INTEGER);

Storing data in a table SQL

INSERT INTO person (person_id, name, age) VALUES (1, 'Boo', 13);

To avoid SQL injection attacks, use parameter placeholders instead of building the SQL text yourself:

connection.execute(
    'INSERT INTO person (name, age) VALUES (?, ?);',
    ('Boo', 13))
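The parameterized form runs end to end with the standard library's sqlite3 module; here is a self-contained sketch using an in-memory database and the person table from the card above:

```python
import sqlite3

# In-memory database; the table mirrors the CREATE TABLE example above.
connection = sqlite3.connect(':memory:')
connection.execute(
    'CREATE TABLE person(person_id INTEGER PRIMARY KEY, name TEXT, age INTEGER);')

# The ? placeholders keep user-supplied values out of the SQL text itself,
# which is what defeats SQL injection.
connection.execute(
    'INSERT INTO person (name, age) VALUES (?, ?);',
    ('Boo', 13))

print(connection.execute('SELECT name, age FROM person;').fetchall())
# [('Boo', 13)]
```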

How to make a generator?

A function becomes a generator function if its body contains a yield or yield from statement.

If a class supports indexing, how can it support assigning into an index and deletion of an index?

If we want to support assigning into an index and deletion of an index, there are additional dunder methods we can add alongside __getitem__(self, index):

- __setitem__(self, index, value), which assigns the specified value into the specified index.
- __delitem__(self, index), which deletes the value at the specified index.
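A sketch of all three dunders together (the class name Record is made up; a dict backs the storage for brevity):

```python
class Record:
    """Supports r[key], r[key] = value, and del r[key]."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]


r = Record()
r['name'] = 'Boo'   # calls r.__setitem__('name', 'Boo')
print(r['name'])    # Boo
del r['name']       # calls r.__delitem__('name')
```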

If a class supports indexing and slicing, how can it support assigning into a slice and deletion of a slice?

It's also possible to assign to a slice of an object, as well as delete a slice. Implementing support for those operations requires similar modifications to __setitem__ and __delitem__, whose index parameter will be a slice object in these situations.

How does a class support being sliceable?

Make the __getitem__(self, index) method handle slice objects.

PyCharm example:

>>> bruh = slice(1, 17, 6)
>>> bruh.start
1
>>> bruh.stop
17
>>> bruh.step
6
>>> start, stop, step = bruh.indices(10)
>>> start, stop, step
(1, 10, 6)

The .indices(len) method clamps a slice's values against a sequence of the given length. A slice object contains three values, which default to None when not explicitly specified.

PyCharm example:

>>> defaulted = slice(None, None, None)
>>> dstart, dstop, dstep = defaulted.indices(10)
>>> dstart, dstop, dstep
(0, 10, 1)
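A sketch of a __getitem__ that handles both integers and slices (the class name Evens is made up): when the index is a slice, indices() normalizes the defaults before we loop.

```python
class Evens:
    """Indexable and sliceable sequence of even numbers."""

    def __init__(self, length):
        self._length = length

    def __len__(self):
        return self._length

    def __getitem__(self, index):
        if isinstance(index, slice):
            # indices() resolves None defaults and negative values for us.
            start, stop, step = index.indices(self._length)
            return [self[i] for i in range(start, stop, step)]
        if not 0 <= index < self._length:
            raise IndexError(index)
        return index * 2


e = Evens(10)
print(e[3])      # 6
print(e[1:7:2])  # [2, 6, 10]
print(e[::-1][:3])  # [18, 16, 14]
```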

How does a class support equality?

Objects support equality (albeit using only their identities) by default. Otherwise, using the __eq__(self, other) dunder supports equality checks.

Why would you want to avoid using mutable objects as default arguments?

Of course, mutating a default argument is almost always going to be a mistake, so our best bet is not to use mutable objects as default arguments. If a default argument is mutable, we'll have to exercise caution with it: we'll have to be sure we never mutate it, never return a reference to it that would allow another part of the program to mutate it, and so on. That's a tall order, so the problem is best avoided.

PyCharm example:

>>> add_to_end('Hello')
['Hello']                    # The default argument [] was used here.
>>> add_to_end('there')
['Hello', 'there']           # If the default argument is [], where did 'Hello' come from?
>>> add_to_end.__defaults__
(['Hello', 'there'],)        # The defaults are stored within the function.
                             # If you mutate them, they change.
>>> add_to_end('Boo')
['Hello', 'there', 'Boo']
>>> add_to_end.__defaults__
(['Hello', 'there', 'Boo'],)
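The body of add_to_end isn't shown in the transcript above, so this is a reconstruction of a function that would behave that way, alongside the conventional None-sentinel fix:

```python
def add_to_end(value, values=[]):
    # The default list is created once, at def time, and shared by every
    # call that omits the argument -- which is exactly the bug.
    values.append(value)
    return values


print(add_to_end('Hello'))      # ['Hello']
print(add_to_end('there'))      # ['Hello', 'there']
print(add_to_end.__defaults__)  # (['Hello', 'there'],)


def safe_add_to_end(value, values=None):
    # The usual fix: use None as a sentinel and build a fresh list per call.
    if values is None:
        values = []
    values.append(value)
    return values


print(safe_add_to_end('Hello'))  # ['Hello']
print(safe_add_to_end('there'))  # ['there'] -- no leakage between calls
```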

Types of arguments

Python draws a distinction between two kinds of arguments: Positional arguments, which are matched to their corresponding parameters based only on the order in which they're specified in the call. Keyword arguments, which are matched to their corresponding parameters based on how the keywords compare to the parameters' names. The positional arguments must be listed first when calling a function, mainly because any other rule would be unnecessarily confusing. Keyword arguments, on the other hand, can be more flexible without introducing confusion, since their names make clear how they correspond to the function's parameters.

Accessing data in SQL

SELECT column1, column2, column3
FROM table
WHERE column2 BETWEEN 1 AND 10
ORDER BY length(column1) DESC;

This returns tuples with 3 values, corresponding to the three selected columns.

Ask for all tables created in a SQL database

SELECT name FROM sqlite_schema;

How does a class support being reverse iterable (reversible)?

Sequences can be reversible but there's also a dunder: The __reversed__(self) method would provide reverse iteration. It returns a reverse iterator (i.e., an iterator that produces the values in reverse order).

How does a class support determining whether it holds a specified item?

Sequences can use the 'in' operator, but there's also a dunder: The __contains__(self, value) method would determine whether a value is part of the sequence, returning True if so or False otherwise.

Positional-only parameters and keyword-only parameters

The special notation * can be used in a parameter list to indicate that you're switching from parameters that can be matched positionally (or with keywords) to parameters that can only be matched via keyword. The * in the parameter list is not actually a parameter; it's simply a way to tell Python that all subsequent parameters must be passed via keyword. It's very common for parameters that follow a * to have default values, though this is not strictly a requirement.

Example:

def func(a, b, *, c = None)

a and b are either positional or keyword parameters; c is a keyword-only parameter.

If we list a / among a function's parameters, it indicates a transition from positional-only parameters to those that might be filled in some other way. To the left of the /, all parameters are positional-only; to the right of the /, they might be positional or keyword.

Example:

def func(a, b, /, c, d, e)

a and b are positional-only parameters; c, d, and e are either positional or keyword parameters.

The two features can be combined, as well, as long as the order is respected; the / must precede the *.

def func(a, b, /, *, c, d, e)

a and b are positional-only parameters; c, d, and e are keyword-only parameters.
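A runnable sketch combining both markers (the function name describe and its parameters are made up for illustration):

```python
def describe(name, /, greeting, *, punctuation='!'):
    # name: positional-only; greeting: positional or keyword;
    # punctuation: keyword-only (note its default value).
    return f'{greeting}, {name}{punctuation}'


print(describe('Boo', 'Hello'))                         # Hello, Boo!
print(describe('Boo', greeting='Hi', punctuation='?'))  # Hi, Boo?
# describe(name='Boo', greeting='Hi') raises TypeError: name is positional-only.
# describe('Boo', 'Hi', '?') raises TypeError: punctuation is keyword-only.
```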

How does a class support arithmetic operators (+, -, *, **, /, //, %)?

We don't expect the arithmetic operators to modify their operands, so we need to be sure to return new values instead of modifying the existing ones. This aligns the behavior of our classes with the types that are built into Python.

__pos__(self) implements the unary plus operator (+).
__neg__(self) implements the unary minus operator (-).
These return positive and negative versions of the object.

Normal arithmetic operator dunders (self ? other):
- __add__(self, other) returns the sum of self and other.
- __sub__(self, other) returns the difference when subtracting other from self.
- __mul__(self, other) returns the product of self and other.
- __truediv__(self, other) returns the quotient of self and other, without taking the floor of the result.
- __floordiv__(self, other) returns the floor of the quotient of self and other.
- __pow__(self, other) returns the result of raising self to the power other.

Reflected arithmetic operator dunders (other ? self):
- __radd__(self, other) returns the sum of other and self.
- __rsub__(self, other) returns the difference when subtracting self from other.
- __rmul__(self, other) returns the product of other and self.
- __rtruediv__(self, other) returns the quotient of other and self, without taking the floor of the result.
- __rfloordiv__(self, other) returns the floor of the quotient of other and self.
- __rpow__(self, other) returns the result of raising other to the power of self.

Augmented arithmetic operator dunders (self ?= other):
- __iadd__(self, other), which adds other to self in-place (i.e., modifying self).
- __isub__(self, other), which subtracts other from self in-place.
- __imul__(self, other), which multiplies self by other in-place.
- __itruediv__(self, other), which divides self by other in-place, without taking the floor of the result.
- __ifloordiv__(self, other), which floor-divides self by other in-place.
- __ipow__(self, other), which raises self to the power other in-place.

When we redefine arithmetic operators, the usual rule is that the operation x ? y turns into a call to the equivalent dunder method, with x being the first argument (i.e., passed into self) and y being the second (i.e., passed into other). So, for example, when evaluating x + y, Python attempts to call x.__add__(y), the result of which becomes the result of the addition. However, when x.__add__(y) is not supported, Python makes one more attempt to add x and y, by calling a reflected version of the operator instead. In the case of addition, that reflected operation is implemented by the dunder method __radd__, so if x.__add__(y) is unsupported, Python attempts to call y.__radd__(x) instead. The mechanism it uses to determine whether x.__add__(y) is supported is simple: if the __add__ method doesn't exist, or if it returns NotImplemented, it's unsupported.

When evaluating x += y, Python attempts to call x.__iadd__(y). If that's not implemented, it falls back to x = x + y, which is like calling x = x.__add__(y).

There's a big difference here in big-O terms: __add__ builds a whole new object (the sum of the left-hand and right-hand operands) and returns it, while __iadd__ only adds the right-hand object to the left-hand object, modifying the left-hand object in place.

- values + additional must leave values intact, meaning that it must build an entirely new list. All of the elements of values need to be copied into that new list, followed by all of the ones that we're adding to the end of it. If there are n elements in values and m elements in additional, we'll spend O(n + m) time on this operation (i.e., linear, but proportional to the sum of the lengths of the two lists).
- values += additional appends each value in additional to the end of values directly. It doesn't matter how many elements are in values, because none of them will need to be relocated; it only matters how many elements are in additional. If there are n elements in values and m elements in additional, we'll spend O(m) time on this operation. It's still linear, but linear with respect to the length of one list, rather than the sum of the lengths of both. This difference can be quite large in practice, as it's not uncommon to add a small number of elements to a large list.
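A sketch of __add__, __radd__, and __iadd__ together with the NotImplemented handshake (the class name Money is made up; the __radd__ case with 0 is what makes the built-in sum() work on these objects):

```python
class Money:
    """Minimal arithmetic-dunder sketch using the NotImplemented protocol."""

    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        if isinstance(other, Money):
            return Money(self.cents + other.cents)  # new object; operands untouched
        return NotImplemented  # lets Python try other.__radd__(self)

    def __radd__(self, other):
        if other == 0:  # sum() starts from 0, so 0 + Money lands here
            return Money(self.cents)
        return NotImplemented

    def __iadd__(self, other):
        if isinstance(other, Money):
            self.cents += other.cents  # in-place: no new object is built
            return self
        return NotImplemented


total = sum([Money(100), Money(250)])
print(total.cents)  # 350
```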

When is an object a sequence?

When a class has both a __len__ method and a __getitem__ method that accepts non-negative indices, an interesting thing happens: Even without an __iter__ method, its objects become iterable automatically. This is because __len__ and __getitem__ combine together into something called the sequence protocol, which means that objects supporting that combination of methods are what we call sequences. If we know that an object is a sequence, we know that it can be iterated without an __iter__ method, via calls to __getitem__ and __len__.

How to make a sequence?

Give the class a __len__ method and a __getitem__ method that accepts non-negative indices. Together those two methods form the sequence protocol, so the class's objects are sequences, and they become iterable automatically even without an __iter__ method.

How to make an iterable?

When we begin iterating an object that's iterable, its __iter__ method is called, which returns an iterator. That iterator, in turn, provides a __next__ method that is called to produce each value in the iteration, one at a time.

def __iter__(self):
    return self

def __next__(self):
    result = self._next
    self._next += self._myrange.step()
    return result

How does a class support indexing?

When we want objects to support indexing, we add at least one dunder method to their class. __getitem__(self, index), which returns the value associated with the specified index. Note that the word "index" does not necessarily mean a non-negative integer, or even an integer at all. It's up to the __getitem__ method to decide what constitutes a valid index and what an index means. (This is what makes it possible to index lists with integers, while being able to index dictionaries with arbitrary hashable keys. Their __getitem__ methods are written differently.) Since __getitem__ accepts a parameter other than self, but needs to perform calculations based on that parameter's value, some validation is necessary. For example, for custom class MyRange: non-integer indices and out-of-range indices raise exceptions with descriptive error messages instead of returning invalid answers.
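The course's MyRange isn't reproduced in this card, so here is a sketch of that kind of index validation (the arithmetic and messages are illustrative, not the course's exact code):

```python
class MyRange:
    """Range-like class whose __getitem__ validates its index."""

    def __init__(self, start, stop, step=1):
        self._start, self._stop, self._step = start, stop, step

    def __len__(self):
        return max(0, (self._stop - self._start + self._step - 1) // self._step)

    def __getitem__(self, index):
        if not isinstance(index, int):
            raise TypeError(f'index must be an integer, but was {index!r}')
        if not 0 <= index < len(self):
            raise IndexError(f'index was out of range: {index}')
        return self._start + index * self._step


r = MyRange(0, 10, 2)
print(r[2])  # 4
# r['x'] raises TypeError; r[17] raises IndexError -- descriptive errors
# instead of invalid answers.
```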

LEGB

You may have seen before that Python resolves names within functions using a rule that is sometimes referred to as LEGB, an acronym standing for Local, Enclosing, Global, Built-in. When you specify an identifier in a Python function, this is how Python decides what you meant by it:

L: If there's a local variable with that name, that's what you're referring to.
E: Otherwise, if there's a variable in an enclosing scope with that name (e.g., a nested function referring to one of its enclosing function's variables), that's what you're referring to.
G: Otherwise, if the identifier is defined globally (i.e., in the currently-executing module), that's what you're referring to.
B: Otherwise, if the identifier is one of Python's built-ins, such as list, str, or len, that's what you're referring to.

Otherwise, an exception will be raised, since the identifier has no accessible definition.

A natural consequence of the LEGB rule is that identifiers can shadow others, which is to say that you can define a local variable with the same name as a global variable in the same module. In the scope of that local variable, the local variable "wins" — though you might notice that a tool like globals() provides you with one possible workaround, albeit a heavy-handed one. In practice, your best bet is to limit the impact of this kind of shadowing wherever you can, by not attempting to rely on fancy techniques to work around it, but instead to respect the scopes introduced in your own designs. (This is one of many techniques to help a program make more sense to a human reader.) But it's handy to understand rules like these, because not understanding them can lead to not being able to understand one's own programs, especially as they change over time.

__delattr__ parameters

__delattr__(self, item): no return value; call super().__delattr__(item). This deletes the attribute named item from self.

__delete__ parameters

__delete__(self, instance):

dunders needed for Context Managers

__enter__(self) __exit__(self, exc_type, exc_value, exc_traceback)

Implementing equality and inequality in a class

__eq__(self, other) implements both equality and inequality checks (== and !=).
__ne__(self, other) ONLY implements inequality, not equality checks (only !=).

So, generally, we can think of __eq__ as necessary when customizing equivalence and __ne__ only as a potential optimization.

As a safety mechanism, when we write an __eq__ method in a class without a __hash__ method being written in that same class, Python automatically sets the value of __hash__ in the class dictionary to None, specifically to avoid the problem we otherwise would have created: specifying a way for two objects to be equivalent without having ensured that their hashes would be the same.

dunders needed for Descriptors

__get__(self, instance, owner) __set__(self, instance, value) __delete__(self, instance) __set_name__(self, owner, name)

How to make a descriptor?

__get__(self, instance, owner) __set__(self, instance, value) __delete__(self, instance) __set_name__(self, owner, name) The __get__, __set__, and __delete__ methods will be called automatically whenever we attempt to access, modify, or delete the attribute the descriptor manages.

__get__ parameters

__get__(self, instance, owner):

__getattr__ parameters

__getattr__(self, item): return whatever you want self.item to "represent". item won't end up in the object's __dict__ afterward, though, since it's just a phantom attribute.

__getattribute__ parameters

__getattribute__(self, item): return super().__getattribute__(item)

How does a class support determining its length?

__len__(self)

dunders needed for Sequence

__len__(self) __getitem__(self, index)

__set__ parameters

__set__(self, instance, value):

__setattr__ parameters

__setattr__(self, key, value): no return, call super().__setattr__(key, value). This will set self.key to value.

How does a class support <, >, <=, >= operations?

An implementation of either __lt__(self, other) or __gt__(self, other) can be used for both the < and > operators, and an implementation of either __le__(self, other) or __ge__(self, other) can be used for both the <= and >= operators. Even though the presence of both __eq__ and __lt__ could theoretically be enough to implement a <= operator, Python doesn't implement that conversion for us. So, if we want relational comparisons, we probably want at least these three dunder methods: __eq__, __lt__, and __le__, with the other three (__ne__, __gt__, and __ge__) providing a mechanism for optimization if we need it.

Packing positional arguments and keyword arguments into a function

def func(*args, **kwargs): args is a tuple-packing parameter kwargs is a dictionary-packing parameter

Unpacking positional arguments and keyword arguments into a function

func(*args, **kwargs)

Modules and Namespaces

globals() returns the current module's __dict__.
dir() returns the current module's __dict__'s keys in a list, in sorted order: dir() == sorted(globals().keys()).
locals() returns a dictionary containing local variables, plus enclosing ones (ONLY if they're referenced within the function!).

Previously, you've likely learned that Python draws a distinction between global variables and local variables, but that this distinction is only meaningful within a function. Local variables are those that are accessible only within a function, while the global variables are the ones that are accessible throughout the module where that function resides. This strongly suggests that globals() and locals() will behave differently when run within a function.

What is id()?

id() is a built-in function that returns an integer identifier for an object. This identifier is guaranteed to be unique and constant for the lifetime of the object, meaning that no two objects that exist at the same time will have the same ID, even if they have the same value. (Once an object has been garbage-collected, its ID may be reused.)

How to add and remove from a set?

set_object = set() set_object.add(x) set_object.remove(x)

When we write x == y, what is called?

x.__eq__(y) is called, unless it returns NotImplemented or x doesn't have an __eq__, in which case y.__eq__(x) is called instead. If that also returns NotImplemented (or y has no __eq__), Python falls back to comparing identities, so the result is that of id(x) == id(y).

