The "assert" statement
**********************

Assert statements are a convenient way to insert debugging assertions into a program:

   assert_stmt ::= "assert" expression ["," expression]

The simple form, "assert expression", is equivalent to

   if __debug__:
       if not expression:
           raise AssertionError

The extended form, "assert expression1, expression2", is equivalent to

   if __debug__:
       if not expression1:
           raise AssertionError(expression2)

These equivalences assume that "__debug__" and "AssertionError" refer to the built-in variables with those names. In the current implementation, the built-in variable "__debug__" is "True" under normal circumstances, "False" when optimization is requested (command line option "-O"). The current code generator emits no code for an assert statement when optimization is requested at compile time. Note that it is unnecessary to include the source code for the expression that failed in the error message; it will be displayed as part of the stack trace.

Assignments to "__debug__" are illegal. The value for the built-in variable is determined when the interpreter starts.


Assignment statements
*********************

Assignment statements are used to (re)bind names to values and to modify attributes or items of mutable objects:

   assignment_stmt ::= (target_list "=")+ (starred_expression | yield_expression)
   target_list     ::= target ("," target)* [","]
   target          ::= identifier
                       | "(" [target_list] ")"
                       | "[" [target_list] "]"
                       | attributeref
                       | subscription
                       | slicing
                       | "*" target

(See section Primaries for the syntax definitions for *attributeref*, *subscription*, and *slicing*.)

An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right.

Assignment is defined recursively depending on the form of the target (list). When a target is part of a mutable object (an attribute reference, subscription or slicing), the mutable object must ultimately perform the assignment and decide about its validity, and may raise an exception if the assignment is unacceptable. The rules observed by various types and the exceptions raised are given with the definition of the object types (see section The standard type hierarchy).

Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows.

* If the target list is a single target with no trailing comma, optionally in parentheses, the object is assigned to that target.

* Else:

  * If the target list contains one target prefixed with an asterisk, called a "starred" target: The object must be an iterable with at least as many items as there are targets in the target list, minus one. The first items of the iterable are assigned, from left to right, to the targets before the starred target. The final items of the iterable are assigned to the targets after the starred target. A list of the remaining items in the iterable is then assigned to the starred target (the list can be empty).
  * Else: The object must be an iterable with the same number of items as there are targets in the target list, and the items are assigned, from left to right, to the corresponding targets.

Assignment of an object to a single target is recursively defined as follows.

* If the target is an identifier (name):

  * If the name does not occur in a "global" or "nonlocal" statement in the current code block: the name is bound to the object in the current local namespace.

  * Otherwise: the name is bound to the object in the global namespace or the outer namespace determined by "nonlocal", respectively.

  The name is rebound if it was already bound. This may cause the reference count for the object previously bound to the name to reach zero, causing the object to be deallocated and its destructor (if it has one) to be called.

* If the target is an attribute reference: The primary expression in the reference is evaluated. It should yield an object with assignable attributes; if this is not the case, "TypeError" is raised. That object is then asked to assign the assigned object to the given attribute; if it cannot perform the assignment, it raises an exception (usually but not necessarily "AttributeError").

  Note: If the object is a class instance and the attribute reference occurs on both sides of the assignment operator, the right-hand side expression, "a.x" can access either an instance attribute or (if no instance attribute exists) a class attribute. The left-hand side target "a.x" is always set as an instance attribute, creating it if necessary. Thus, the two occurrences of "a.x" do not necessarily refer to the same attribute: if the right-hand side expression refers to a class attribute, the left-hand side creates a new instance attribute as the target of the assignment:

     class Cls:
         x = 3             # class variable
     inst = Cls()
     inst.x = inst.x + 1   # writes inst.x as 4 leaving Cls.x as 3

  This description does not necessarily apply to descriptor attributes, such as properties created with "property()".

* If the target is a subscription: The primary expression in the reference is evaluated. It should yield either a mutable sequence object (such as a list) or a mapping object (such as a dictionary). Next, the subscript expression is evaluated.

  If the primary is a mutable sequence object (such as a list), the subscript must yield an integer. If it is negative, the sequence's length is added to it. The resulting value must be a nonnegative integer less than the sequence's length, and the sequence is asked to assign the assigned object to its item with that index. If the index is out of range, "IndexError" is raised (assignment to a subscripted sequence cannot add new items to a list).

  If the primary is a mapping object (such as a dictionary), the subscript must have a type compatible with the mapping's key type, and the mapping is then asked to create a key/value pair which maps the subscript to the assigned object. This can either replace an existing key/value pair with the same key value, or insert a new key/value pair (if no key with the same value existed).

  For user-defined objects, the "__setitem__()" method is called with appropriate arguments.

* If the target is a slicing: The primary expression in the reference is evaluated. It should yield a mutable sequence object (such as a list). The assigned object should be a sequence object of the same type. Next, the lower and upper bound expressions are evaluated, insofar they are present; defaults are zero and the sequence's length.
  The bounds should evaluate to integers. If either bound is negative, the sequence's length is added to it. The resulting bounds are clipped to lie between zero and the sequence's length, inclusive. Finally, the sequence object is asked to replace the slice with the items of the assigned sequence. The length of the slice may be different from the length of the assigned sequence, thus changing the length of the target sequence, if the target sequence allows it.

**CPython implementation detail:** In the current implementation, the syntax for targets is taken to be the same as for expressions, and invalid syntax is rejected during the code generation phase, causing less detailed error messages.

Although the definition of assignment implies that overlaps between the left-hand side and the right-hand side are 'simultaneous' (for example "a, b = b, a" swaps two variables), overlaps *within* the collection of assigned-to variables occur left-to-right, sometimes resulting in confusion. For instance, the following program prints "[0, 2]":

   x = [0, 1]
   i = 0
   i, x[i] = 1, 2         # i is updated, then x[i] is updated
   print(x)

See also:

  **PEP 3132** - Extended Iterable Unpacking
     The specification for the "*target" feature.


Augmented assignment statements
===============================

Augmented assignment is the combination, in a single statement, of a binary operation and an assignment statement:

   augmented_assignment_stmt ::= augtarget augop (expression_list | yield_expression)
   augtarget                 ::= identifier | attributeref | subscription | slicing
   augop                     ::= "+=" | "-=" | "*=" | "@=" | "/=" | "//=" | "%=" | "**="
                                 | ">>=" | "<<=" | "&=" | "^=" | "|="

(See section Primaries for the syntax definitions of the last three symbols.)

An augmented assignment evaluates the target (which, unlike normal assignment statements, cannot be an unpacking) and the expression list, performs the binary operation specific to the type of assignment on the two operands, and assigns the result to the original target. The target is only evaluated once.

An augmented assignment statement like "x += 1" can be rewritten as "x = x + 1" to achieve a similar, but not exactly equal effect. In the augmented version, "x" is only evaluated once. Also, when possible, the actual operation is performed *in-place*, meaning that rather than creating a new object and assigning that to the target, the old object is modified instead.

Unlike normal assignments, augmented assignments evaluate the left-hand side *before* evaluating the right-hand side. For example, "a[i] += f(x)" first looks-up "a[i]", then it evaluates "f(x)" and performs the addition, and lastly, it writes the result back to "a[i]".

With the exception of assigning to tuples and multiple targets in a single statement, the assignment done by augmented assignment statements is handled the same way as normal assignments. Similarly, with the exception of the possible *in-place* behavior, the binary operation performed by augmented assignment is the same as the normal binary operations.

For targets which are attribute references, the same caveat about class and instance attributes applies as for regular assignments.
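As a brief illustration of the in-place behavior described above (a minimal sketch; the names "a" and "b" are illustrative only): "+=" on a mutable sequence modifies the existing object, while "+" followed by assignment rebinds the name to a new one:

   a = [1, 2]
   b = a
   a += [3]          # in-place: list.__iadd__ mutates the existing list
   assert a is b and b == [1, 2, 3]

   a = a + [4]       # "+" builds a new list; "a" is rebound, "b" is not
   assert a is not b and b == [1, 2, 3]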
Annotated assignment statements
===============================

*Annotation* assignment is the combination, in a single statement, of a variable or attribute annotation and an optional assignment statement:

   annotated_assignment_stmt ::= augtarget ":" expression ["=" (starred_expression | yield_expression)]

The difference from normal Assignment statements is that only a single target is allowed.

The assignment target is considered "simple" if it consists of a single name that is not enclosed in parentheses. For simple assignment targets, if in class or module scope, the annotations are evaluated and stored in a special class or module attribute "__annotations__" that is a dictionary mapping from variable names (mangled if private) to evaluated annotations. This attribute is writable and is automatically created at the start of class or module body execution, if annotations are found statically.

If the assignment target is not simple (an attribute, subscript node, or parenthesized name), the annotation is evaluated if in class or module scope, but not stored.

If a name is annotated in a function scope, then this name is local for that scope. Annotations are never evaluated and stored in function scopes.

If the right hand side is present, an annotated assignment performs the actual assignment before evaluating annotations (where applicable). If the right hand side is not present for an expression target, then the interpreter evaluates the target except for the last "__setitem__()" or "__setattr__()" call.

See also:

  **PEP 526** - Syntax for Variable Annotations
     The proposal that added syntax for annotating the types of variables (including class variables and instance variables), instead of expressing them through comments.

  **PEP 484** - Type hints
     The proposal that added the "typing" module to provide a standard syntax for type annotations that can be used in static analysis tools and IDEs.

Changed in version 3.8: Now annotated assignments allow the same expressions in the right hand side as regular assignments. Previously, some expressions (like un-parenthesized tuple expressions) caused a syntax error.


Coroutines
**********

Added in version 3.5.


Coroutine function definition
=============================

   async_funcdef ::= [decorators] "async" "def" funcname "(" [parameter_list] ")" ["->" expression] ":" suite

Execution of Python coroutines can be suspended and resumed at many points (see *coroutine*). "await" expressions, "async for" and "async with" can only be used in the body of a coroutine function.

Functions defined with "async def" syntax are always coroutine functions, even if they do not contain "await" or "async" keywords.

It is a "SyntaxError" to use a "yield from" expression inside the body of a coroutine function.

An example of a coroutine function:

   async def func(param1, param2):
       do_stuff()
       await some_coroutine()

Changed in version 3.7: "await" and "async" are now keywords; previously they were only treated as such inside the body of a coroutine function.


The "async for" statement
=========================

   async_for_stmt ::= "async" for_stmt

An *asynchronous iterable* provides an "__aiter__" method that directly returns an *asynchronous iterator*, which can call asynchronous code in its "__anext__" method.

The "async for" statement allows convenient iteration over asynchronous iterables.
The following code:

   async for TARGET in ITER:
       SUITE
   else:
       SUITE2

Is semantically equivalent to:

   iter = (ITER)
   iter = type(iter).__aiter__(iter)
   running = True

   while running:
       try:
           TARGET = await type(iter).__anext__(iter)
       except StopAsyncIteration:
           running = False
       else:
           SUITE
   else:
       SUITE2

See also "__aiter__()" and "__anext__()" for details.

It is a "SyntaxError" to use an "async for" statement outside the body of a coroutine function.


The "async with" statement
==========================

   async_with_stmt ::= "async" with_stmt

An *asynchronous context manager* is a *context manager* that is able to suspend execution in its *enter* and *exit* methods.

The following code:

   async with EXPRESSION as TARGET:
       SUITE

is semantically equivalent to:

   manager = (EXPRESSION)
   aenter = type(manager).__aenter__
   aexit = type(manager).__aexit__
   value = await aenter(manager)
   hit_except = False

   try:
       TARGET = value
       SUITE
   except:
       hit_except = True
       if not await aexit(manager, *sys.exc_info()):
           raise
   finally:
       if not hit_except:
           await aexit(manager, None, None, None)

See also "__aenter__()" and "__aexit__()" for details.

It is a "SyntaxError" to use an "async with" statement outside the body of a coroutine function.

See also:

  **PEP 492** - Coroutines with async and await syntax
     The proposal that made coroutines a proper standalone concept in Python, and added supporting syntax.


Identifiers (Names)
*******************

An identifier occurring as an atom is a name. See section Identifiers and keywords for lexical definition and section Naming and binding for documentation of naming and binding.

When the name is bound to an object, evaluation of the atom yields that object. When a name is not bound, an attempt to evaluate it raises a "NameError" exception.


Private name mangling
=====================

When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a *private name* of that class.

See also: The class specifications.

More precisely, private names are transformed to a longer form before code is generated for them. If the transformed name is longer than 255 characters, implementation-defined truncation may happen.

The transformation is independent of the syntactical context in which the identifier is used but only the following private identifiers are mangled:

* Any name used as the name of a variable that is assigned or read or any name of an attribute being accessed. The "__name__" attribute of nested functions, classes, and type aliases is however not mangled.

* The name of imported modules, e.g., "__spam" in "import __spam". If the module is part of a package (i.e., its name contains a dot), the name is *not* mangled, e.g., the "__foo" in "import __foo.bar" is not mangled.

* The name of an imported member, e.g., "__f" in "from spam import __f".

The transformation rule is defined as follows:

* The class name, with leading underscores removed and a single leading underscore inserted, is inserted in front of the identifier, e.g., the identifier "__spam" occurring in a class named "Foo", "_Foo" or "__Foo" is transformed to "_Foo__spam".

* If the class name consists only of underscores, the transformation is the identity, e.g., the identifier "__spam" occurring in a class named "_" or "__" is left as is.
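The transformation rule above can be observed directly; a minimal sketch, with illustrative class and attribute names:

   class Foo:
       def __init__(self):
           self.__spam = 1          # mangled to _Foo__spam

   obj = Foo()
   assert obj._Foo__spam == 1       # the transformed name is accessible
   try:
       obj.__spam                   # the untransformed name is not
   except AttributeError:
       pass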
Literals
********

Python supports string and bytes literals and various numeric literals:

   literal ::= stringliteral | bytesliteral | integer | floatnumber | imagnumber

Evaluation of a literal yields an object of the given type (string, bytes, integer, floating-point number, complex number) with the given value. The value may be approximated in the case of floating-point and imaginary (complex) literals. See section Literals for details.

All literals correspond to immutable data types, and hence the object's identity is less important than its value. Multiple evaluations of literals with the same value (either the same occurrence in the program text or a different occurrence) may obtain the same object or a different object with the same value.


Customizing attribute access
****************************

The following methods can be defined to customize the meaning of attribute access (use of, assignment to, or deletion of "x.name") for class instances.

object.__getattr__(self, name)

   Called when the default attribute access fails with an "AttributeError" (either "__getattribute__()" raises an "AttributeError" because *name* is not an instance attribute or an attribute in the class tree for "self"; or "__get__()" of a *name* property raises "AttributeError"). This method should either return the (computed) attribute value or raise an "AttributeError" exception.

   Note that if the attribute is found through the normal mechanism, "__getattr__()" is not called. (This is an intentional asymmetry between "__getattr__()" and "__setattr__()".) This is done both for efficiency reasons and because otherwise "__getattr__()" would have no way to access other attributes of the instance. Note that at least for instance variables, you can fake total control by not inserting any values in the instance attribute dictionary (but instead inserting them in another object). See the "__getattribute__()" method below for a way to actually get total control over attribute access.

object.__getattribute__(self, name)

   Called unconditionally to implement attribute accesses for instances of the class. If the class also defines "__getattr__()", the latter will not be called unless "__getattribute__()" either calls it explicitly or raises an "AttributeError". This method should return the (computed) attribute value or raise an "AttributeError" exception. In order to avoid infinite recursion in this method, its implementation should always call the base class method with the same name to access any attributes it needs, for example, "object.__getattribute__(self, name)".

   Note: This method may still be bypassed when looking up special methods as the result of implicit invocation via language syntax or built-in functions. See Special method lookup.

   For certain sensitive attribute accesses, raises an auditing event "object.__getattr__" with arguments "obj" and "name".

object.__setattr__(self, name, value)

   Called when an attribute assignment is attempted. This is called instead of the normal mechanism (i.e. store the value in the instance dictionary). *name* is the attribute name, *value* is the value to be assigned to it.

   If "__setattr__()" wants to assign to an instance attribute, it should call the base class method with the same name, for example, "object.__setattr__(self, name, value)".

   For certain sensitive attribute assignments, raises an auditing event "object.__setattr__" with arguments "obj", "name", "value".
object.__delattr__(self, name)

   Like "__setattr__()" but for attribute deletion instead of assignment. This should only be implemented if "del obj.name" is meaningful for the object.

   For certain sensitive attribute deletions, raises an auditing event "object.__delattr__" with arguments "obj" and "name".

object.__dir__(self)

   Called when "dir()" is called on the object. An iterable must be returned. "dir()" converts the returned iterable to a list and sorts it.


Customizing module attribute access
===================================

Special names "__getattr__" and "__dir__" can be also used to customize access to module attributes. The "__getattr__" function at the module level should accept one argument which is the name of an attribute and return the computed value or raise an "AttributeError". If an attribute is not found on a module object through the normal lookup, i.e. "object.__getattribute__()", then "__getattr__" is searched in the module "__dict__" before raising an "AttributeError". If found, it is called with the attribute name and the result is returned.

The "__dir__" function should accept no arguments, and return an iterable of strings that represents the names accessible on module. If present, this function overrides the standard "dir()" search on a module.

For a more fine grained customization of the module behavior (setting attributes, properties, etc.), one can set the "__class__" attribute of a module object to a subclass of "types.ModuleType". For example:

   import sys
   from types import ModuleType

   class VerboseModule(ModuleType):
       def __repr__(self):
           return f'Verbose {self.__name__}'

       def __setattr__(self, attr, value):
           print(f'Setting {attr}...')
           super().__setattr__(attr, value)

   sys.modules[__name__].__class__ = VerboseModule

Note: Defining module "__getattr__" and setting module "__class__" only affect lookups made using the attribute access syntax; directly accessing the module globals (whether by code within the module, or via a reference to the module's globals dictionary) is unaffected.

Changed in version 3.5: "__class__" module attribute is now writable.

Added in version 3.7: "__getattr__" and "__dir__" module attributes.

See also:

  **PEP 562** - Module __getattr__ and __dir__
     Describes the "__getattr__" and "__dir__" functions on modules.


Implementing Descriptors
========================

The following methods only apply when an instance of the class containing the method (a so-called *descriptor* class) appears in an *owner* class (the descriptor must be in either the owner's class dictionary or in the class dictionary for one of its parents). In the examples below, "the attribute" refers to the attribute whose name is the key of the property in the owner class' "__dict__".

object.__get__(self, instance, owner=None)

   Called to get the attribute of the owner class (class attribute access) or of an instance of that class (instance attribute access). The optional *owner* argument is the owner class, while *instance* is the instance that the attribute was accessed through, or "None" when the attribute is accessed through the *owner*.

   This method should return the computed attribute value or raise an "AttributeError" exception.

   **PEP 252** specifies that "__get__()" is callable with one or two arguments. Python's own built-in descriptors support this specification; however, it is likely that some third-party tools have descriptors that require both arguments. Python's own "__getattribute__()" implementation always passes in both arguments whether they are required or not.
object.__set__(self, instance, value)

   Called to set the attribute on an instance *instance* of the owner class to a new value, *value*.

   Note, adding "__set__()" or "__delete__()" changes the kind of descriptor to a "data descriptor". See Invoking Descriptors for more details.

object.__delete__(self, instance)

   Called to delete the attribute on an instance *instance* of the owner class.

Instances of descriptors may also have the "__objclass__" attribute present:

object.__objclass__

   The attribute "__objclass__" is interpreted by the "inspect" module as specifying the class where this object was defined (setting this appropriately can assist in runtime introspection of dynamic class attributes). For callables, it may indicate that an instance of the given type (or a subclass) is expected or required as the first positional argument (for example, CPython sets this attribute for unbound methods that are implemented in C).


Invoking Descriptors
====================

In general, a descriptor is an object attribute with "binding behavior", one whose attribute access has been overridden by methods in the descriptor protocol: "__get__()", "__set__()", and "__delete__()". If any of those methods are defined for an object, it is said to be a descriptor.

The default behavior for attribute access is to get, set, or delete the attribute from an object's dictionary. For instance, "a.x" has a lookup chain starting with "a.__dict__['x']", then "type(a).__dict__['x']", and continuing through the base classes of "type(a)" excluding metaclasses.

However, if the looked-up value is an object defining one of the descriptor methods, then Python may override the default behavior and invoke the descriptor method instead. Where this occurs in the precedence chain depends on which descriptor methods were defined and how they were called.

The starting point for descriptor invocation is a binding, "a.x". How the arguments are assembled depends on "a":

Direct Call
   The simplest and least common call is when user code directly invokes a descriptor method: "x.__get__(a)".

Instance Binding
   If binding to an object instance, "a.x" is transformed into the call: "type(a).__dict__['x'].__get__(a, type(a))".

Class Binding
   If binding to a class, "A.x" is transformed into the call: "A.__dict__['x'].__get__(None, A)".

Super Binding
   A dotted lookup such as "super(A, a).x" searches "a.__class__.__mro__" for a base class "B" following "A" and then returns "B.__dict__['x'].__get__(a, A)". If not a descriptor, "x" is returned unchanged.

For instance bindings, the precedence of descriptor invocation depends on which descriptor methods are defined. A descriptor can define any combination of "__get__()", "__set__()" and "__delete__()". If it does not define "__get__()", then accessing the attribute will return the descriptor object itself unless there is a value in the object's instance dictionary. If the descriptor defines "__set__()" and/or "__delete__()", it is a data descriptor; if it defines neither, it is a non-data descriptor. Normally, data descriptors define both "__get__()" and "__set__()", while non-data descriptors have just the "__get__()" method. Data descriptors with "__get__()" and "__set__()" (and/or "__delete__()") defined always override a redefinition in an instance dictionary. In contrast, non-data descriptors can be overridden by instances.

Python methods (including those decorated with "@staticmethod" and "@classmethod") are implemented as non-data descriptors. Accordingly, instances can redefine and override methods.
This allows individual instances to acquire behaviors that differ from other instances of the same class.

The "property()" function is implemented as a data descriptor. Accordingly, instances cannot override the behavior of a property.


__slots__
=========

*__slots__* allow us to explicitly declare data members (like properties) and deny the creation of "__dict__" and *__weakref__* (unless explicitly declared in *__slots__* or available in a parent.)

The space saved over using "__dict__" can be significant. Attribute lookup speed can be significantly improved as well.

object.__slots__

   This class variable can be assigned a string, iterable, or sequence of strings with variable names used by instances. *__slots__* reserves space for the declared variables and prevents the automatic creation of "__dict__" and *__weakref__* for each instance.

Notes on using *__slots__*:

* When inheriting from a class without *__slots__*, the "__dict__" and *__weakref__* attribute of the instances will always be accessible.

* Without a "__dict__" variable, instances cannot be assigned new variables not listed in the *__slots__* definition. Attempts to assign to an unlisted variable name raises "AttributeError". If dynamic assignment of new variables is desired, then add "'__dict__'" to the sequence of strings in the *__slots__* declaration.

* Without a *__weakref__* variable for each instance, classes defining *__slots__* do not support "weak references" to its instances. If weak reference support is needed, then add "'__weakref__'" to the sequence of strings in the *__slots__* declaration.

* *__slots__* are implemented at the class level by creating descriptors for each variable name. As a result, class attributes cannot be used to set default values for instance variables defined by *__slots__*; otherwise, the class attribute would overwrite the descriptor assignment.

* The action of a *__slots__* declaration is not limited to the class where it is defined. *__slots__* declared in parents are available in child classes. However, child subclasses will get a "__dict__" and *__weakref__* unless they also define *__slots__* (which should only contain names of any *additional* slots).

* If a class defines a slot also defined in a base class, the instance variable defined by the base class slot is inaccessible (except by retrieving its descriptor directly from the base class). This renders the meaning of the program undefined. In the future, a check may be added to prevent this.

* "TypeError" will be raised if nonempty *__slots__* are defined for a class derived from a "variable-length" built-in type such as "int", "bytes", and "tuple".

* Any non-string *iterable* may be assigned to *__slots__*.

* If a "dictionary" is used to assign *__slots__*, the dictionary keys will be used as the slot names. The values of the dictionary can be used to provide per-attribute docstrings that will be recognised by "inspect.getdoc()" and displayed in the output of "help()".

* "__class__" assignment works only if both classes have the same *__slots__*.

* Multiple inheritance with multiple slotted parent classes can be used, but only one parent is allowed to have attributes created by slots (the other bases must have empty slot layouts) - violations raise "TypeError".

* If an *iterator* is used for *__slots__* then a *descriptor* is created for each of the iterator's values. However, the *__slots__* attribute will be an empty iterator.
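A minimal sketch of the basic behavior covered by the notes above (the class name is illustrative only): instances of a slotted class carry no "__dict__" and reject unlisted attribute names:

   class Point:
       __slots__ = ('x', 'y')       # no per-instance __dict__ or __weakref__

       def __init__(self, x, y):
           self.x = x
           self.y = y

   p = Point(1, 2)
   assert not hasattr(p, '__dict__')
   try:
       p.z = 3                      # 'z' is not listed in __slots__
   except AttributeError:
       pass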
Attribute references
********************

An attribute reference is a primary followed by a period and a name:

   attributeref ::= primary "." identifier

The primary must evaluate to an object of a type that supports attribute references, which most objects do. This object is then asked to produce the attribute whose name is the identifier. The type and value produced is determined by the object. Multiple evaluations of the same attribute reference may yield different objects.

This production can be customized by overriding the "__getattribute__()" method or the "__getattr__()" method. The "__getattribute__()" method is called first and either returns a value or raises "AttributeError" if the attribute is not available. If an "AttributeError" is raised and the object has a "__getattr__()" method, that method is called as a fallback.


Await expression
****************

Suspend the execution of *coroutine* on an *awaitable* object. Can only be used inside a *coroutine function*.

   await_expr ::= "await" primary

Added in version 3.5.


Binary arithmetic operations
****************************

The binary arithmetic operations have the conventional priority levels. Note that some of these operations also apply to certain non-numeric types.
Apart from the power operator, there are only two levels, one for multiplicative operators and one for additive operators:

   m_expr ::= u_expr | m_expr "*" u_expr | m_expr "@" m_expr |
              m_expr "//" u_expr | m_expr "/" u_expr |
              m_expr "%" u_expr
   a_expr ::= m_expr | a_expr "+" m_expr | a_expr "-" m_expr

The "*" (multiplication) operator yields the product of its arguments. The arguments must either both be numbers, or one argument must be an integer and the other must be a sequence. In the former case, the numbers are converted to a common type and then multiplied together. In the latter case, sequence repetition is performed; a negative repetition factor yields an empty sequence.

This operation can be customized using the special "__mul__()" and "__rmul__()" methods.

The "@" (at) operator is intended to be used for matrix multiplication. No builtin Python types implement this operator. This operation can be customized using the special "__matmul__()" and "__rmatmul__()" methods.

Added in version 3.5.

The "/" (division) and "//" (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Division of integers yields a float, while floor division of integers results in an integer; the result is that of mathematical division with the 'floor' function applied to the result. Division by zero raises the "ZeroDivisionError" exception.

The division operation can be customized using the special "__truediv__()" and "__rtruediv__()" methods. The floor division operation can be customized using the special "__floordiv__()" and "__rfloordiv__()" methods.

The "%" (modulo) operator yields the remainder from the division of the first argument by the second. The numeric arguments are first converted to a common type. A zero right argument raises the "ZeroDivisionError" exception. The arguments may be floating-point numbers, e.g., "3.14%0.7" equals "0.34" (since "3.14" equals "4*0.7 + 0.34".) The modulo operator always yields a result with the same sign as its second operand (or zero); the absolute value of the result is strictly smaller than the absolute value of the second operand [1].

The floor division and modulo operators are connected by the following identity: "x == (x//y)*y + (x%y)". Floor division and modulo are also connected with the built-in function "divmod()": "divmod(x, y) == (x//y, x%y)". [2].

In addition to performing the modulo operation on numbers, the "%" operator is also overloaded by string objects to perform old-style string formatting (also known as interpolation). The syntax for string formatting is described in the Python Library Reference, section printf-style String Formatting.

The *modulo* operation can be customized using the special "__mod__()" and "__rmod__()" methods.

The floor division operator, the modulo operator, and the "divmod()" function are not defined for complex numbers. Instead, convert to a floating-point number using the "abs()" function if appropriate.

The "+" (addition) operator yields the sum of its arguments. The arguments must either both be numbers or both be sequences of the same type. In the former case, the numbers are converted to a common type and then added together. In the latter case, the sequences are concatenated. This operation can be customized using the special "__add__()" and "__radd__()" methods.

The "-" (subtraction) operator yields the difference of its arguments. The numeric arguments are first converted to a common type.
This operation can be customized using the special "__sub__()" and "__rsub__()" methods.


Binary bitwise operations
*************************

Each of the three bitwise operations has a different priority level:

   and_expr ::= shift_expr | and_expr "&" shift_expr
   xor_expr ::= and_expr | xor_expr "^" and_expr
   or_expr  ::= xor_expr | or_expr "|" xor_expr

The "&" operator yields the bitwise AND of its arguments, which must be integers or one of them must be a custom object overriding "__and__()" or "__rand__()" special methods.

The "^" operator yields the bitwise XOR (exclusive OR) of its arguments, which must be integers or one of them must be a custom object overriding "__xor__()" or "__rxor__()" special methods.

The "|" operator yields the bitwise (inclusive) OR of its arguments, which must be integers or one of them must be a custom object overriding "__or__()" or "__ror__()" special methods.


Code Objects
************

Code objects are used by the implementation to represent "pseudo-compiled" executable Python code such as a function body. They differ from function objects because they don't contain a reference to their global execution environment. Code objects are returned by the built-in "compile()" function and can be extracted from function objects through their "__code__" attribute. See also the "code" module.

Accessing "__code__" raises an auditing event "object.__getattr__" with arguments "obj" and ""__code__"".

A code object can be executed or evaluated by passing it (instead of a source string) to the "exec()" or "eval()" built-in functions.

See The standard type hierarchy for more information.


The Ellipsis Object
*******************

This object is commonly used by slicing (see Slicings). It supports no special operations. There is exactly one ellipsis object, named "Ellipsis" (a built-in name). "type(Ellipsis)()" produces the "Ellipsis" singleton.

It is written as "Ellipsis" or "...".


The Null Object
***************

This object is returned by functions that don't explicitly return a value. It supports no special operations. There is exactly one null object, named "None" (a built-in name). "type(None)()" produces the same singleton.

It is written as "None".


Type Objects
************

Type objects represent the various object types. An object's type is accessed by the built-in function "type()". There are no special operations on types. The standard module "types" defines names for all standard built-in types.

Types are written like this: "<class 'int'>".


Boolean operations
******************

   or_test  ::= and_test | or_test "or" and_test
   and_test ::= not_test | and_test "and" not_test
   not_test ::= comparison | "not" not_test

In the context of Boolean operations, and also when expressions are used by control flow statements, the following values are interpreted as false: "False", "None", numeric zero of all types, and empty strings and containers (including strings, tuples, lists, dictionaries, sets and frozensets). All other values are interpreted as true. User-defined objects can customize their truth value by providing a "__bool__()" method.

The operator "not" yields "True" if its argument is false, "False" otherwise.

The expression "x and y" first evaluates *x*; if *x* is false, its value is returned; otherwise, *y* is evaluated and the resulting value is returned.
The expression "x or y" first evaluates *x*; if *x* is true, its value is returned; otherwise, *y* is evaluated and the resulting value is returned. Note that neither "and" nor "or" restrict the value and type they return to "False" and "True", but rather return the last evaluated argument. This is sometimes useful, e.g., if "s" is a string that should be replaced by a default value if it is empty, the expression "s or 'foo'" yields the desired value. Because "not" has to create a new value, it returns a boolean value regardless of the type of its argument (for example, "not 'foo'" produces "False" rather than "''".) �breaka$ The "break" statement ********************* break_stmt ::= "break" "break" may only occur syntactically nested in a "for" or "while" loop, but not nested in a function or class definition within that loop. It terminates the nearest enclosing loop, skipping the optional "else" clause if the loop has one. If a "for" loop is terminated by "break", the loop control target keeps its current value. When "break" passes control out of a "try" statement with a "finally" clause, that "finally" clause is executed before really leaving the loop. zcallable-typesu Emulating callable objects ************************** object.__call__(self[, args...]) Called when the instance is “called” as a function; if this method is defined, "x(arg1, arg2, ...)" roughly translates to "type(x).__call__(x, arg1, ...)". �callsu� Calls ***** A call calls a callable object (e.g., a *function*) with a possibly empty series of *arguments*: call ::= primary "(" [argument_list [","] | comprehension] ")" argument_list ::= positional_arguments ["," starred_and_keywords] ["," keywords_arguments] | starred_and_keywords ["," keywords_arguments] | keywords_arguments positional_arguments ::= positional_item ("," positional_item)* positional_item ::= assignment_expression | "*" expression starred_and_keywords ::= ("*" expression | keyword_item) ("," "*" expression | "," keyword_item)* keywords_arguments ::= (keyword_item | "**" expression) ("," keyword_item | "," "**" expression)* keyword_item ::= identifier "=" expression An optional trailing comma may be present after the positional and keyword arguments but does not affect the semantics. The primary must evaluate to a callable object (user-defined functions, built-in functions, methods of built-in objects, class objects, methods of class instances, and all objects having a "__call__()" method are callable). All argument expressions are evaluated before the call is attempted. Please refer to section Function definitions for the syntax of formal *parameter* lists. If keyword arguments are present, they are first converted to positional arguments, as follows. First, a list of unfilled slots is created for the formal parameters. If there are N positional arguments, they are placed in the first N slots. Next, for each keyword argument, the identifier is used to determine the corresponding slot (if the identifier is the same as the first formal parameter name, the first slot is used, and so on). If the slot is already filled, a "TypeError" exception is raised. Otherwise, the argument is placed in the slot, filling it (even if the expression is "None", it fills the slot). When all arguments have been processed, the slots that are still unfilled are filled with the corresponding default value from the function definition. 
(Default values are calculated, once, when the function is defined; thus, a mutable object such as a list or dictionary used as default value will be shared by all calls that don't specify an argument value for the corresponding slot; this should usually be avoided.) If there are any unfilled slots for which no default value is specified, a "TypeError" exception is raised. Otherwise, the list of filled slots is used as the argument list for the call.

**CPython implementation detail:** An implementation may provide built-in functions whose positional parameters do not have names, even if they are 'named' for the purpose of documentation, and which therefore cannot be supplied by keyword. In CPython, this is the case for functions implemented in C that use "PyArg_ParseTuple()" to parse their arguments.

If there are more positional arguments than there are formal parameter slots, a "TypeError" exception is raised, unless a formal parameter using the syntax "*identifier" is present; in this case, that formal parameter receives a tuple containing the excess positional arguments (or an empty tuple if there were no excess positional arguments).

If any keyword argument does not correspond to a formal parameter name, a "TypeError" exception is raised, unless a formal parameter using the syntax "**identifier" is present; in this case, that formal parameter receives a dictionary containing the excess keyword arguments (using the keywords as keys and the argument values as corresponding values), or a (new) empty dictionary if there were no excess keyword arguments.

If the syntax "*expression" appears in the function call, "expression" must evaluate to an *iterable*. Elements from these iterables are treated as if they were additional positional arguments. For the call "f(x1, x2, *y, x3, x4)", if *y* evaluates to a sequence *y1*, ..., *yM*, this is equivalent to a call with M+4 positional arguments *x1*, *x2*, *y1*, ..., *yM*, *x3*, *x4*.

A consequence of this is that although the "*expression" syntax may appear *after* explicit keyword arguments, it is processed *before* the keyword arguments (and any "**expression" arguments; see below). So:

   >>> def f(a, b):
   ...     print(a, b)
   ...
   >>> f(b=1, *(2,))
   2 1
   >>> f(a=1, *(2,))
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   TypeError: f() got multiple values for keyword argument 'a'
   >>> f(1, *(2,))
   1 2

It is unusual for both keyword arguments and the "*expression" syntax to be used in the same call, so in practice this confusion does not often arise.

If the syntax "**expression" appears in the function call, "expression" must evaluate to a *mapping*, the contents of which are treated as additional keyword arguments. If a parameter matching a key has already been given a value (by an explicit keyword argument, or from another unpacking), a "TypeError" exception is raised.

When "**expression" is used, each key in this mapping must be a string. Each value from the mapping is assigned to the first formal parameter eligible for keyword assignment whose name is equal to the key. A key need not be a Python identifier (e.g. "max-temp °F" is acceptable, although it will not match any formal parameter that could be declared). If there is no match to a formal parameter the key-value pair is collected by the "**" parameter, if there is one, or if there is not, a "TypeError" exception is raised.

Formal parameters using the syntax "*identifier" or "**identifier" cannot be used as positional argument slots or as keyword argument names.
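A minimal sketch of the "**expression" rules just described (the function "f" and its parameters are illustrative only):

   def f(a, b=0, **extra):
       return a, b, extra

   assert f(1, **{'b': 2}) == (1, 2, {})
   # A key that matches no formal parameter is collected by "**extra":
   assert f(1, **{'max-temp °F': 90}) == (1, 0, {'max-temp °F': 90})
   try:
       f(1, **{'a': 2})             # 'a' is already filled positionally
   except TypeError:
       pass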
Changed in version 3.5: Function calls accept any number of "*" and "**" unpackings, positional arguments may follow iterable unpackings ("*"), and keyword arguments may follow dictionary unpackings ("**"). Originally proposed by **PEP 448**.

A call always returns some value, possibly "None", unless it raises an exception. How this value is computed depends on the type of the callable object. If it is:

a user-defined function:
   The code block for the function is executed, passing it the argument list. The first thing the code block will do is bind the formal parameters to the arguments; this is described in section Function definitions. When the code block executes a "return" statement, this specifies the return value of the function call.

a built-in function or method:
   The result is up to the interpreter; see Built-in Functions for the descriptions of built-in functions and methods.

a class object:
   A new instance of that class is returned.

a class instance method:
   The corresponding user-defined function is called, with an argument list that is one longer than the argument list of the call: the instance becomes the first argument.

a class instance:
   The class must define a "__call__()" method; the effect is then the same as if that method was called.


Class definitions
*****************

A class definition defines a class object (see section The standard type hierarchy):

   classdef    ::= [decorators] "class" classname [type_params] [inheritance] ":" suite
   inheritance ::= "(" [argument_list] ")"
   classname   ::= identifier

A class definition is an executable statement. The inheritance list usually gives a list of base classes (see Metaclasses for more advanced uses), so each item in the list should evaluate to a class object which allows subclassing. Classes without an inheritance list inherit, by default, from the base class "object"; hence,

   class Foo:
       pass

is equivalent to

   class Foo(object):
       pass

The class's suite is then executed in a new execution frame (see Naming and binding), using a newly created local namespace and the original global namespace. (Usually, the suite contains mostly function definitions.) When the class's suite finishes execution, its execution frame is discarded but its local namespace is saved. [5] A class object is then created using the inheritance list for the base classes and the saved local namespace for the attribute dictionary. The class name is bound to this class object in the original local namespace.

The order in which attributes are defined in the class body is preserved in the new class's "__dict__". Note that this is reliable only right after the class is created and only for classes that were defined using the definition syntax.

Class creation can be customized heavily using metaclasses.

Classes can also be decorated: just like when decorating functions,

   @f1(arg)
   @f2
   class Foo: pass

is roughly equivalent to

   class Foo: pass
   Foo = f1(arg)(f2(Foo))

The evaluation rules for the decorator expressions are the same as for function decorators. The result is then bound to the class name.

Changed in version 3.9: Classes may be decorated with any valid "assignment_expression". Previously, the grammar was much more restrictive; see **PEP 614** for details.

A list of type parameters may be given in square brackets immediately after the class's name. This indicates to static type checkers that the class is generic. At runtime, the type parameters can be retrieved from the class's "__type_params__" attribute. See Generic classes for more.
Changed in version 3.12: Type parameter lists are new in Python 3.12.

**Programmer's note:** Variables defined in the class definition are class attributes; they are shared by instances. Instance attributes can be set in a method with "self.name = value". Both class and instance attributes are accessible through the notation "self.name", and an instance attribute hides a class attribute with the same name when accessed in this way. Class attributes can be used as defaults for instance attributes, but using mutable values there can lead to unexpected results. Descriptors can be used to create instance variables with different implementation details.

See also:

  **PEP 3115** - Metaclasses in Python 3000
     The proposal that changed the declaration of metaclasses to the current syntax, and the semantics for how classes with metaclasses are constructed.

  **PEP 3129** - Class Decorators
     The proposal that added class decorators. Function and method decorators were introduced in **PEP 318**.


Comparisons
***********

Unlike C, all comparison operations in Python have the same priority, which is lower than that of any arithmetic, shifting or bitwise operation. Also unlike C, expressions like "a < b < c" have the interpretation that is conventional in mathematics:

   comparison    ::= or_expr (comp_operator or_expr)*
   comp_operator ::= "<" | ">" | "==" | ">=" | "<=" | "!="
                     | "is" ["not"] | ["not"] "in"

Comparisons yield boolean values: "True" or "False". Custom *rich comparison methods* may return non-boolean values. In this case Python will call "bool()" on such value in boolean contexts.

Comparisons can be chained arbitrarily, e.g., "x < y <= z" is equivalent to "x < y and y <= z", except that "y" is evaluated only once (but in both cases "z" is not evaluated at all when "x < y" is found to be false).

Formally, if *a*, *b*, *c*, ..., *y*, *z* are expressions and *op1*, *op2*, ..., *opN* are comparison operators, then "a op1 b op2 c ... y opN z" is equivalent to "a op1 b and b op2 c and ... y opN z", except that each expression is evaluated at most once.

Note that "a op1 b op2 c" doesn't imply any kind of comparison between *a* and *c*, so that, e.g., "x < y > z" is perfectly legal (though perhaps not pretty).


Value comparisons
=================

The operators "<", ">", "==", ">=", "<=", and "!=" compare the values of two objects. The objects do not need to have the same type.

Chapter Objects, values and types states that objects have a value (in addition to type and identity). The value of an object is a rather abstract notion in Python: For example, there is no canonical access method for an object's value. Also, there is no requirement that the value of an object should be constructed in a particular way, e.g. comprised of all its data attributes. Comparison operators implement a particular notion of what the value of an object is. One can think of them as defining the value of an object indirectly, by means of their comparison implementation.

Because all types are (direct or indirect) subtypes of "object", they inherit the default comparison behavior from "object". Types can customize their comparison behavior by implementing *rich comparison methods* like "__lt__()", described in Basic customization.

The default behavior for equality comparison ("==" and "!=") is based on the identity of the objects. Hence, equality comparison of instances with the same identity results in equality, and equality comparison of instances with different identities results in inequality.
A motivation for this default behavior is the desire that all objects should be reflexive (i.e. "x is y" implies "x == y").

A default order comparison ("<", ">", "<=", and ">=") is not provided; an attempt raises "TypeError". A motivation for this default behavior is the lack of a similar invariant as for equality.

The behavior of the default equality comparison, that instances with different identities are always unequal, may be in contrast to what types will need that have a sensible definition of object value and value-based equality. Such types will need to customize their comparison behavior, and in fact, a number of built-in types have done that.

The following list describes the comparison behavior of the most important built-in types.

* Numbers of built-in numeric types (Numeric Types: int, float, complex) and of the standard library types "fractions.Fraction" and "decimal.Decimal" can be compared within and across their types, with the restriction that complex numbers do not support order comparison. Within the limits of the types involved, they compare mathematically (algorithmically) correct without loss of precision.

  The not-a-number values "float('NaN')" and "decimal.Decimal('NaN')" are special. Any ordered comparison of a number to a not-a-number value is false. A counter-intuitive implication is that not-a-number values are not equal to themselves. For example, if "x = float('NaN')", "3 < x", "x < 3" and "x == x" are all false, while "x != x" is true. This behavior is compliant with IEEE 754.

* "None" and "NotImplemented" are singletons. **PEP 8** advises that comparisons for singletons should always be done with "is" or "is not", never the equality operators.

* Binary sequences (instances of "bytes" or "bytearray") can be compared within and across their types. They compare lexicographically using the numeric values of their elements.

* Strings (instances of "str") compare lexicographically using the numerical Unicode code points (the result of the built-in function "ord()") of their characters. [3]

  Strings and binary sequences cannot be directly compared.

* Sequences (instances of "tuple", "list", or "range") can be compared only within each of their types, with the restriction that ranges do not support order comparison. Equality comparison across these types results in inequality, and ordering comparison across these types raises "TypeError".

  Sequences compare lexicographically using comparison of corresponding elements. The built-in containers typically assume identical objects are equal to themselves. That lets them bypass equality tests for identical objects to improve performance and to maintain their internal invariants.

  Lexicographical comparison between built-in collections works as follows:

  * For two collections to compare equal, they must be of the same type, have the same length, and each pair of corresponding elements must compare equal (for example, "[1,2] == (1,2)" is false because the type is not the same).

  * Collections that support order comparison are ordered the same as their first unequal elements (for example, "[1,2,x] <= [1,2,y]" has the same value as "x <= y"). If a corresponding element does not exist, the shorter collection is ordered first (for example, "[1,2] < [1,2,3]" is true).

* Mappings (instances of "dict") compare equal if and only if they have equal "(key, value)" pairs. Equality comparison of the keys and values enforces reflexivity.

  Order comparisons ("<", ">", "<=", and ">=") raise "TypeError".
* Sets (instances of "set" or "frozenset") can be compared within and across their types. They define order comparison operators to mean subset and superset tests. Those relations do not define total orderings (for example, the two sets "{1,2}" and "{2,3}" are not equal, nor subsets of one another, nor supersets of one another). Accordingly, sets are not appropriate arguments for functions which depend on total ordering (for example, "min()", "max()", and "sorted()" produce undefined results given a list of sets as inputs). Comparison of sets enforces reflexivity of its elements. * Most other built-in types have no comparison methods implemented, so they inherit the default comparison behavior. User-defined classes that customize their comparison behavior should follow some consistency rules, if possible: * Equality comparison should be reflexive. In other words, identical objects should compare equal: "x is y" implies "x == y" * Comparison should be symmetric. In other words, the following expressions should have the same result: "x == y" and "y == x" "x != y" and "y != x" "x < y" and "y > x" "x <= y" and "y >= x" * Comparison should be transitive. The following (non-exhaustive) examples illustrate that: "x > y and y > z" implies "x > z" "x < y and y <= z" implies "x < z" * Inverse comparison should result in the boolean negation. In other words, the following expressions should have the same result: "x == y" and "not x != y" "x < y" and "not x >= y" (for total ordering) "x > y" and "not x <= y" (for total ordering) The last two expressions apply to totally ordered collections (e.g. to sequences, but not to sets or mappings). See also the "total_ordering()" decorator. * The "hash()" result should be consistent with equality. Objects that are equal should either have the same hash value, or be marked as unhashable. Python does not enforce these consistency rules. In fact, the not-a-number values are an example for not following these rules. Membership test operations ========================== The operators "in" and "not in" test for membership. "x in s" evaluates to "True" if *x* is a member of *s*, and "False" otherwise. "x not in s" returns the negation of "x in s". All built-in sequences and set types support this as well as dictionary, for which "in" tests whether the dictionary has a given key. For container types such as list, tuple, set, frozenset, dict, or collections.deque, the expression "x in y" is equivalent to "any(x is e or x == e for e in y)". For the string and bytes types, "x in y" is "True" if and only if *x* is a substring of *y*. An equivalent test is "y.find(x) != -1". Empty strings are always considered to be a substring of any other string, so """ in "abc"" will return "True". For user-defined classes which define the "__contains__()" method, "x in y" returns "True" if "y.__contains__(x)" returns a true value, and "False" otherwise. For user-defined classes which do not define "__contains__()" but do define "__iter__()", "x in y" is "True" if some value "z", for which the expression "x is z or x == z" is true, is produced while iterating over "y". If an exception is raised during the iteration, it is as if "in" raised that exception. Lastly, the old-style iteration protocol is tried: if a class defines "__getitem__()", "x in y" is "True" if and only if there is a non- negative integer index *i* such that "x is y[i] or x == y[i]", and no lower integer index raises the "IndexError" exception. (If any other exception is raised, it is as if "in" raised that exception). 
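As a sketch of the membership protocols just described (the "EvenNumbers" and "Pair" classes are hypothetical, not from the reference text), "in" prefers "__contains__()" and otherwise falls back to iteration:

   >>> class EvenNumbers:
   ...     def __contains__(self, item):
   ...         return isinstance(item, int) and item % 2 == 0
   ...
   >>> 4 in EvenNumbers()
   True
   >>> class Pair:
   ...     def __init__(self, a, b):
   ...         self.a, self.b = a, b
   ...     def __iter__(self):       # no __contains__(): "in" iterates instead
   ...         yield self.a
   ...         yield self.b
   ...
   >>> "x" in Pair("x", "y")
   True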
The operator "not in" is defined to have the inverse truth value of "in". Identity comparisons ==================== The operators "is" and "is not" test for an object’s identity: "x is y" is true if and only if *x* and *y* are the same object. An Object’s identity is determined using the "id()" function. "x is not y" yields the inverse truth value. [4] �compoundu�� Compound statements ******************* Compound statements contain (groups of) other statements; they affect or control the execution of those other statements in some way. In general, compound statements span multiple lines, although in simple incarnations a whole compound statement may be contained in one line. The "if", "while" and "for" statements implement traditional control flow constructs. "try" specifies exception handlers and/or cleanup code for a group of statements, while the "with" statement allows the execution of initialization and finalization code around a block of code. Function and class definitions are also syntactically compound statements. A compound statement consists of one or more ‘clauses.’ A clause consists of a header and a ‘suite.’ The clause headers of a particular compound statement are all at the same indentation level. Each clause header begins with a uniquely identifying keyword and ends with a colon. A suite is a group of statements controlled by a clause. A suite can be one or more semicolon-separated simple statements on the same line as the header, following the header’s colon, or it can be one or more indented statements on subsequent lines. Only the latter form of a suite can contain nested compound statements; the following is illegal, mostly because it wouldn’t be clear to which "if" clause a following "else" clause would belong: if test1: if test2: print(x) Also note that the semicolon binds tighter than the colon in this context, so that in the following example, either all or none of the "print()" calls are executed: if x < y < z: print(x); print(y); print(z) Summarizing: compound_stmt ::= if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | match_stmt | funcdef | classdef | async_with_stmt | async_for_stmt | async_funcdef suite ::= stmt_list NEWLINE | NEWLINE INDENT statement+ DEDENT statement ::= stmt_list NEWLINE | compound_stmt stmt_list ::= simple_stmt (";" simple_stmt)* [";"] Note that statements always end in a "NEWLINE" possibly followed by a "DEDENT". Also note that optional continuation clauses always begin with a keyword that cannot start a statement, thus there are no ambiguities (the ‘dangling "else"’ problem is solved in Python by requiring nested "if" statements to be indented). The formatting of the grammar rules in the following sections places each clause on a separate line for clarity. The "if" statement ================== The "if" statement is used for conditional execution: if_stmt ::= "if" assignment_expression ":" suite ("elif" assignment_expression ":" suite)* ["else" ":" suite] It selects exactly one of the suites by evaluating the expressions one by one until one is found to be true (see section Boolean operations for the definition of true and false); then that suite is executed (and no other part of the "if" statement is executed or evaluated). If all expressions are false, the suite of the "else" clause, if present, is executed. 
The "while" statement ===================== The "while" statement is used for repeated execution as long as an expression is true: while_stmt ::= "while" assignment_expression ":" suite ["else" ":" suite] This repeatedly tests the expression and, if it is true, executes the first suite; if the expression is false (which may be the first time it is tested) the suite of the "else" clause, if present, is executed and the loop terminates. A "break" statement executed in the first suite terminates the loop without executing the "else" clause’s suite. A "continue" statement executed in the first suite skips the rest of the suite and goes back to testing the expression. The "for" statement =================== The "for" statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable object: for_stmt ::= "for" target_list "in" starred_list ":" suite ["else" ":" suite] The "starred_list" expression is evaluated once; it should yield an *iterable* object. An *iterator* is created for that iterable. The first item provided by the iterator is then assigned to the target list using the standard rules for assignments (see Assignment statements), and the suite is executed. This repeats for each item provided by the iterator. When the iterator is exhausted, the suite in the "else" clause, if present, is executed, and the loop terminates. A "break" statement executed in the first suite terminates the loop without executing the "else" clause’s suite. A "continue" statement executed in the first suite skips the rest of the suite and continues with the next item, or with the "else" clause if there is no next item. The for-loop makes assignments to the variables in the target list. This overwrites all previous assignments to those variables including those made in the suite of the for-loop: for i in range(10): print(i) i = 5 # this will not affect the for-loop # because i will be overwritten with the next # index in the range Names in the target list are not deleted when the loop is finished, but if the sequence is empty, they will not have been assigned to at all by the loop. Hint: the built-in type "range()" represents immutable arithmetic sequences of integers. For instance, iterating "range(3)" successively yields 0, 1, and then 2. Changed in version 3.11: Starred elements are now allowed in the expression list. The "try" statement =================== The "try" statement specifies exception handlers and/or cleanup code for a group of statements: try_stmt ::= try1_stmt | try2_stmt | try3_stmt try1_stmt ::= "try" ":" suite ("except" [expression ["as" identifier]] ":" suite)+ ["else" ":" suite] ["finally" ":" suite] try2_stmt ::= "try" ":" suite ("except" "*" expression ["as" identifier] ":" suite)+ ["else" ":" suite] ["finally" ":" suite] try3_stmt ::= "try" ":" suite "finally" ":" suite Additional information on exceptions can be found in section Exceptions, and information on using the "raise" statement to generate exceptions may be found in section The raise statement. "except" clause --------------- The "except" clause(s) specify one or more exception handlers. When no exception occurs in the "try" clause, no exception handler is executed. When an exception occurs in the "try" suite, a search for an exception handler is started. This search inspects the "except" clauses in turn until one is found that matches the exception. An expression-less "except" clause, if present, must be last; it matches any exception. 
For an "except" clause with an expression, the expression must evaluate to an exception type or a tuple of exception types. The raised exception matches an "except" clause whose expression evaluates to the class or a *non-virtual base class* of the exception object, or to a tuple that contains such a class. If no "except" clause matches the exception, the search for an exception handler continues in the surrounding code and on the invocation stack. [1] If the evaluation of an expression in the header of an "except" clause raises an exception, the original search for a handler is canceled and a search starts for the new exception in the surrounding code and on the call stack (it is treated as if the entire "try" statement raised the exception). When a matching "except" clause is found, the exception is assigned to the target specified after the "as" keyword in that "except" clause, if present, and the "except" clause’s suite is executed. All "except" clauses must have an executable block. When the end of this block is reached, execution continues normally after the entire "try" statement. (This means that if two nested handlers exist for the same exception, and the exception occurs in the "try" clause of the inner handler, the outer handler will not handle the exception.) When an exception has been assigned using "as target", it is cleared at the end of the "except" clause. This is as if except E as N: foo was translated to except E as N: try: foo finally: del N This means the exception must be assigned to a different name to be able to refer to it after the "except" clause. Exceptions are cleared because with the traceback attached to them, they form a reference cycle with the stack frame, keeping all locals in that frame alive until the next garbage collection occurs. Before an "except" clause’s suite is executed, the exception is stored in the "sys" module, where it can be accessed from within the body of the "except" clause by calling "sys.exception()". When leaving an exception handler, the exception stored in the "sys" module is reset to its previous value: >>> print(sys.exception()) None >>> try: ... raise TypeError ... except: ... print(repr(sys.exception())) ... try: ... raise ValueError ... except: ... print(repr(sys.exception())) ... print(repr(sys.exception())) ... TypeError() ValueError() TypeError() >>> print(sys.exception()) None "except*" clause ---------------- The "except*" clause(s) are used for handling "ExceptionGroup"s. The exception type for matching is interpreted as in the case of "except", but in the case of exception groups we can have partial matches when the type matches some of the exceptions in the group. This means that multiple "except*" clauses can execute, each handling part of the exception group. Each clause executes at most once and handles an exception group of all matching exceptions. Each exception in the group is handled by at most one "except*" clause, the first that matches it. >>> try: ... raise ExceptionGroup("eg", ... [ValueError(1), TypeError(2), OSError(3), OSError(4)]) ... except* TypeError as e: ... print(f'caught {type(e)} with nested {e.exceptions}') ... except* OSError as e: ... print(f'caught {type(e)} with nested {e.exceptions}') ... 
caught <class 'ExceptionGroup'> with nested (TypeError(2),) caught <class 'ExceptionGroup'> with nested (OSError(3), OSError(4)) + Exception Group Traceback (most recent call last): | File "<stdin>", line 2, in <module> | ExceptionGroup: eg +-+---------------- 1 ---------------- | ValueError: 1 +------------------------------------ Any remaining exceptions that were not handled by any "except*" clause are re-raised at the end, along with all exceptions that were raised from within the "except*" clauses. If this list contains more than one exception to reraise, they are combined into an exception group. If the raised exception is not an exception group and its type matches one of the "except*" clauses, it is caught and wrapped by an exception group with an empty message string. >>> try: ... raise BlockingIOError ... except* BlockingIOError as e: ... print(repr(e)) ... ExceptionGroup('', (BlockingIOError())) An "except*" clause must have a matching expression; it cannot be "except*:". Furthermore, this expression cannot contain exception group types, because that would have ambiguous semantics. It is not possible to mix "except" and "except*" in the same "try". "break", "continue" and "return" cannot appear in an "except*" clause. "else" clause ------------- The optional "else" clause is executed if the control flow leaves the "try" suite, no exception was raised, and no "return", "continue", or "break" statement was executed. Exceptions in the "else" clause are not handled by the preceding "except" clauses. "finally" clause ---------------- If "finally" is present, it specifies a ‘cleanup’ handler. The "try" clause is executed, including any "except" and "else" clauses. If an exception occurs in any of the clauses and is not handled, the exception is temporarily saved. The "finally" clause is executed. If there is a saved exception it is re-raised at the end of the "finally" clause. If the "finally" clause raises another exception, the saved exception is set as the context of the new exception. If the "finally" clause executes a "return", "break" or "continue" statement, the saved exception is discarded: >>> def f(): ... try: ... 1/0 ... finally: ... return 42 ... >>> f() 42 The exception information is not available to the program during execution of the "finally" clause. When a "return", "break" or "continue" statement is executed in the "try" suite of a "try"…"finally" statement, the "finally" clause is also executed ‘on the way out.’ The return value of a function is determined by the last "return" statement executed. Since the "finally" clause always executes, a "return" statement executed in the "finally" clause will always be the last one executed: >>> def foo(): ... try: ... return 'try' ... finally: ... return 'finally' ... >>> foo() 'finally' Changed in version 3.8: Prior to Python 3.8, a "continue" statement was illegal in the "finally" clause due to a problem with the implementation. The "with" statement ==================== The "with" statement is used to wrap the execution of a block with methods defined by a context manager (see section With Statement Context Managers). This allows common "try"…"except"…"finally" usage patterns to be encapsulated for convenient reuse. with_stmt ::= "with" ( "(" with_stmt_contents ","? ")" | with_stmt_contents ) ":" suite with_stmt_contents ::= with_item ("," with_item)* with_item ::= expression ["as" target] The execution of the "with" statement with one “item” proceeds as follows: 1. 
The context expression (the expression given in the "with_item") is evaluated to obtain a context manager. 2. The context manager’s "__enter__()" is loaded for later use. 3. The context manager’s "__exit__()" is loaded for later use. 4. The context manager’s "__enter__()" method is invoked. 5. If a target was included in the "with" statement, the return value from "__enter__()" is assigned to it. Note: The "with" statement guarantees that if the "__enter__()" method returns without an error, then "__exit__()" will always be called. Thus, if an error occurs during the assignment to the target list, it will be treated the same as an error occurring within the suite would be. See step 7 below. 6. The suite is executed. 7. The context manager’s "__exit__()" method is invoked. If an exception caused the suite to be exited, its type, value, and traceback are passed as arguments to "__exit__()". Otherwise, three "None" arguments are supplied. If the suite was exited due to an exception, and the return value from the "__exit__()" method was false, the exception is reraised. If the return value was true, the exception is suppressed, and execution continues with the statement following the "with" statement. If the suite was exited for any reason other than an exception, the return value from "__exit__()" is ignored, and execution proceeds at the normal location for the kind of exit that was taken. The following code: with EXPRESSION as TARGET: SUITE is semantically equivalent to: manager = (EXPRESSION) enter = type(manager).__enter__ exit = type(manager).__exit__ value = enter(manager) hit_except = False try: TARGET = value SUITE except: hit_except = True if not exit(manager, *sys.exc_info()): raise finally: if not hit_except: exit(manager, None, None, None) With more than one item, the context managers are processed as if multiple "with" statements were nested: with A() as a, B() as b: SUITE is semantically equivalent to: with A() as a: with B() as b: SUITE You can also write multi-item context managers in multiple lines if the items are surrounded by parentheses. For example: with ( A() as a, B() as b, ): SUITE Changed in version 3.1: Support for multiple context expressions. Changed in version 3.10: Support for using grouping parentheses to break the statement in multiple lines. See also: **PEP 343** - The “with” statement The specification, background, and examples for the Python "with" statement. The "match" statement ===================== Added in version 3.10. The match statement is used for pattern matching. Syntax: match_stmt ::= 'match' subject_expr ":" NEWLINE INDENT case_block+ DEDENT subject_expr ::= star_named_expression "," star_named_expressions? | named_expression case_block ::= 'case' patterns [guard] ":" block Note: This section uses single quotes to denote soft keywords. Pattern matching takes a pattern as input (following "case") and a subject value (following "match"). The pattern (which may contain subpatterns) is matched against the subject value. The outcomes are: * A match success or failure (also termed a pattern success or failure). * Possible binding of matched values to a name. The prerequisites for this are further discussed below. The "match" and "case" keywords are soft keywords. See also: * **PEP 634** – Structural Pattern Matching: Specification * **PEP 636** – Structural Pattern Matching: Tutorial Overview -------- Here’s an overview of the logical flow of a match statement: 1. 
The subject expression "subject_expr" is evaluated and a resulting subject value obtained. If the subject expression contains a comma, a tuple is constructed using the standard rules. 2. Each pattern in a "case_block" is attempted to match with the subject value. The specific rules for success or failure are described below. The match attempt can also bind some or all of the standalone names within the pattern. The precise pattern binding rules vary per pattern type and are specified below. **Name bindings made during a successful pattern match outlive the executed block and can be used after the match statement**. Note: During failed pattern matches, some subpatterns may succeed. Do not rely on bindings being made for a failed match. Conversely, do not rely on variables remaining unchanged after a failed match. The exact behavior is dependent on implementation and may vary. This is an intentional decision made to allow different implementations to add optimizations. 3. If the pattern succeeds, the corresponding guard (if present) is evaluated. In this case all name bindings are guaranteed to have happened. * If the guard evaluates as true or is missing, the "block" inside "case_block" is executed. * Otherwise, the next "case_block" is attempted as described above. * If there are no further case blocks, the match statement is completed. Note: Users should generally never rely on a pattern being evaluated. Depending on implementation, the interpreter may cache values or use other optimizations which skip repeated evaluations. A sample match statement: >>> flag = False >>> match (100, 200): ... case (100, 300): # Mismatch: 200 != 300 ... print('Case 1') ... case (100, 200) if flag: # Successful match, but guard fails ... print('Case 2') ... case (100, y): # Matches and binds y to 200 ... print(f'Case 3, y: {y}') ... case _: # Pattern not attempted ... print('Case 4, I match anything!') ... Case 3, y: 200 In this case, "if flag" is a guard. Read more about that in the next section. Guards ------ guard ::= "if" named_expression A "guard" (which is part of the "case") must succeed for code inside the "case" block to execute. It takes the form: "if" followed by an expression. The logical flow of a "case" block with a "guard" follows: 1. Check that the pattern in the "case" block succeeded. If the pattern failed, the "guard" is not evaluated and the next "case" block is checked. 2. If the pattern succeeded, evaluate the "guard". * If the "guard" condition evaluates as true, the case block is selected. * If the "guard" condition evaluates as false, the case block is not selected. * If the "guard" raises an exception during evaluation, the exception bubbles up. Guards are allowed to have side effects as they are expressions. Guard evaluation must proceed from the first to the last case block, one at a time, skipping case blocks whose pattern(s) don’t all succeed. (I.e., guard evaluation must happen in order.) Guard evaluation must stop once a case block is selected. Irrefutable Case Blocks ----------------------- An irrefutable case block is a match-all case block. A match statement may have at most one irrefutable case block, and it must be last. A case block is considered irrefutable if it has no guard and its pattern is irrefutable. A pattern is considered irrefutable if we can prove from its syntax alone that it will always succeed. 
Only the following patterns are irrefutable: * AS Patterns whose left-hand side is irrefutable * OR Patterns containing at least one irrefutable pattern * Capture Patterns * Wildcard Patterns * parenthesized irrefutable patterns Patterns -------- Note: This section uses grammar notations beyond standard EBNF: * the notation "SEP.RULE+" is shorthand for "RULE (SEP RULE)*" * the notation "!RULE" is shorthand for a negative lookahead assertion The top-level syntax for "patterns" is: patterns ::= open_sequence_pattern | pattern pattern ::= as_pattern | or_pattern closed_pattern ::= | literal_pattern | capture_pattern | wildcard_pattern | value_pattern | group_pattern | sequence_pattern | mapping_pattern | class_pattern The descriptions below will include a description “in simple terms” of what a pattern does for illustration purposes (credits to Raymond Hettinger for a document that inspired most of the descriptions). Note that these descriptions are purely for illustration purposes and **may not** reflect the underlying implementation. Furthermore, they do not cover all valid forms. OR Patterns ~~~~~~~~~~~ An OR pattern is two or more patterns separated by vertical bars "|". Syntax: or_pattern ::= "|".closed_pattern+ Only the final subpattern may be irrefutable, and each subpattern must bind the same set of names to avoid ambiguity. An OR pattern matches each of its subpatterns in turn to the subject value, until one succeeds. The OR pattern is then considered successful. Otherwise, if none of the subpatterns succeed, the OR pattern fails. In simple terms, "P1 | P2 | ..." will try to match "P1", if it fails it will try to match "P2", succeeding immediately if any succeeds, failing otherwise. AS Patterns ~~~~~~~~~~~ An AS pattern matches an OR pattern on the left of the "as" keyword against a subject. Syntax: as_pattern ::= or_pattern "as" capture_pattern If the OR pattern fails, the AS pattern fails. Otherwise, the AS pattern binds the subject to the name on the right of the as keyword and succeeds. "capture_pattern" cannot be a "_". In simple terms "P as NAME" will match with "P", and on success it will set "NAME = <subject>". Literal Patterns ~~~~~~~~~~~~~~~~ A literal pattern corresponds to most literals in Python. Syntax: literal_pattern ::= signed_number | signed_number "+" NUMBER | signed_number "-" NUMBER | strings | "None" | "True" | "False" signed_number ::= ["-"] NUMBER The rule "strings" and the token "NUMBER" are defined in the standard Python grammar. Triple-quoted strings are supported. Raw strings and byte strings are supported. f-strings are not supported. The forms "signed_number '+' NUMBER" and "signed_number '-' NUMBER" are for expressing complex numbers; they require a real number on the left and an imaginary number on the right. E.g. "3 + 4j". In simple terms, "LITERAL" will succeed only if "<subject> == LITERAL". For the singletons "None", "True" and "False", the "is" operator is used. Capture Patterns ~~~~~~~~~~~~~~~~ A capture pattern binds the subject value to a name. Syntax: capture_pattern ::= !'_' NAME A single underscore "_" is not a capture pattern (this is what "!'_'" expresses). It is instead treated as a "wildcard_pattern". In a given pattern, a given name can only be bound once. E.g. "case x, x: ..." is invalid while "case [x] | x: ..." is allowed. Capture patterns always succeed. 
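The following sketch (a hypothetical "describe()" function, not from the reference text) combines the pattern kinds introduced so far — literal, OR, AS, capture and wildcard patterns:

   >>> def describe(value):
   ...     match value:
   ...         case None | False:    # OR pattern of literal patterns
   ...             return "nothing"
   ...         case 0 | 1 as bit:    # AS pattern binds the matched literal
   ...             return f"bit {bit}"
   ...         case other:           # capture pattern: always succeeds
   ...             return f"other: {other!r}"
   ...
   >>> describe(False)
   'nothing'
   >>> describe(1)
   'bit 1'
   >>> describe("spam")
   "other: 'spam'"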
The binding follows scoping rules established by the assignment expression operator in **PEP 572**; the name becomes a local variable in the closest containing function scope unless there’s an applicable "global" or "nonlocal" statement. In simple terms "NAME" will always succeed and it will set "NAME = <subject>". Wildcard Patterns ~~~~~~~~~~~~~~~~~ A wildcard pattern always succeeds (matches anything) and binds no name. Syntax: wildcard_pattern ::= '_' "_" is a soft keyword within any pattern, but only within patterns. It is an identifier, as usual, even within "match" subject expressions, "guard"s, and "case" blocks. In simple terms, "_" will always succeed. Value Patterns ~~~~~~~~~~~~~~ A value pattern represents a named value in Python. Syntax: value_pattern ::= attr attr ::= name_or_attr "." NAME name_or_attr ::= attr | NAME The dotted name in the pattern is looked up using standard Python name resolution rules. The pattern succeeds if the value found compares equal to the subject value (using the "==" equality operator). In simple terms "NAME1.NAME2" will succeed only if "<subject> == NAME1.NAME2" Note: If the same value occurs multiple times in the same match statement, the interpreter may cache the first value found and reuse it rather than repeat the same lookup. This cache is strictly tied to a given execution of a given match statement. Group Patterns ~~~~~~~~~~~~~~ A group pattern allows users to add parentheses around patterns to emphasize the intended grouping. Otherwise, it has no additional syntax. Syntax: group_pattern ::= "(" pattern ")" In simple terms "(P)" has the same effect as "P". Sequence Patterns ~~~~~~~~~~~~~~~~~ A sequence pattern contains several subpatterns to be matched against sequence elements. The syntax is similar to the unpacking of a list or tuple. sequence_pattern ::= "[" [maybe_sequence_pattern] "]" | "(" [open_sequence_pattern] ")" open_sequence_pattern ::= maybe_star_pattern "," [maybe_sequence_pattern] maybe_sequence_pattern ::= ",".maybe_star_pattern+ ","? maybe_star_pattern ::= star_pattern | pattern star_pattern ::= "*" (capture_pattern | wildcard_pattern) There is no difference if parentheses or square brackets are used for sequence patterns (i.e. "(...)" vs "[...]" ). Note: A single pattern enclosed in parentheses without a trailing comma (e.g. "(3 | 4)") is a group pattern. While a single pattern enclosed in square brackets (e.g. "[3 | 4]") is still a sequence pattern. At most one star subpattern may be in a sequence pattern. The star subpattern may occur in any position. If no star subpattern is present, the sequence pattern is a fixed-length sequence pattern; otherwise it is a variable-length sequence pattern. The following is the logical flow for matching a sequence pattern against a subject value: 1. If the subject value is not a sequence [2], the sequence pattern fails. 2. If the subject value is an instance of "str", "bytes" or "bytearray" the sequence pattern fails. 3. The subsequent steps depend on whether the sequence pattern is fixed or variable-length. If the sequence pattern is fixed-length: 1. If the length of the subject sequence is not equal to the number of subpatterns, the sequence pattern fails 2. Subpatterns in the sequence pattern are matched to their corresponding items in the subject sequence from left to right. Matching stops as soon as a subpattern fails. If all subpatterns succeed in matching their corresponding item, the sequence pattern succeeds. Otherwise, if the sequence pattern is variable-length: 1. 
If the length of the subject sequence is less than the number of non-star subpatterns, the sequence pattern fails. 2. The leading non-star subpatterns are matched to their corresponding items as for fixed-length sequences. 3. If the previous step succeeds, the star subpattern matches a list formed of the remaining subject items, excluding the remaining items corresponding to non-star subpatterns following the star subpattern. 4. Remaining non-star subpatterns are matched to their corresponding subject items, as for a fixed-length sequence. Note: The length of the subject sequence is obtained via "len()" (i.e. via the "__len__()" protocol). This length may be cached by the interpreter in a similar manner as value patterns. In simple terms "[P1, P2, P3," … ", P<N>]" matches only if all the following happens: * check "<subject>" is a sequence * "len(subject) == <N>" * "P1" matches "<subject>[0]" (note that this match can also bind names) * "P2" matches "<subject>[1]" (note that this match can also bind names) * … and so on for the corresponding pattern/element. Mapping Patterns ~~~~~~~~~~~~~~~~ A mapping pattern contains one or more key-value patterns. The syntax is similar to the construction of a dictionary. Syntax: mapping_pattern ::= "{" [items_pattern] "}" items_pattern ::= ",".key_value_pattern+ ","? key_value_pattern ::= (literal_pattern | value_pattern) ":" pattern | double_star_pattern double_star_pattern ::= "**" capture_pattern At most one double star pattern may be in a mapping pattern. The double star pattern must be the last subpattern in the mapping pattern. Duplicate keys in mapping patterns are disallowed. Duplicate literal keys will raise a "SyntaxError". Two keys that otherwise have the same value will raise a "ValueError" at runtime. The following is the logical flow for matching a mapping pattern against a subject value: 1. If the subject value is not a mapping [3],the mapping pattern fails. 2. If every key given in the mapping pattern is present in the subject mapping, and the pattern for each key matches the corresponding item of the subject mapping, the mapping pattern succeeds. 3. If duplicate keys are detected in the mapping pattern, the pattern is considered invalid. A "SyntaxError" is raised for duplicate literal values; or a "ValueError" for named keys of the same value. Note: Key-value pairs are matched using the two-argument form of the mapping subject’s "get()" method. Matched key-value pairs must already be present in the mapping, and not created on-the-fly via "__missing__()" or "__getitem__()". In simple terms "{KEY1: P1, KEY2: P2, ... }" matches only if all the following happens: * check "<subject>" is a mapping * "KEY1 in <subject>" * "P1" matches "<subject>[KEY1]" * … and so on for the corresponding KEY/pattern pair. Class Patterns ~~~~~~~~~~~~~~ A class pattern represents a class and its positional and keyword arguments (if any). Syntax: class_pattern ::= name_or_attr "(" [pattern_arguments ","?] ")" pattern_arguments ::= positional_patterns ["," keyword_patterns] | keyword_patterns positional_patterns ::= ",".pattern+ keyword_patterns ::= ",".keyword_pattern+ keyword_pattern ::= NAME "=" pattern The same keyword should not be repeated in class patterns. The following is the logical flow for matching a class pattern against a subject value: 1. If "name_or_attr" is not an instance of the builtin "type" , raise "TypeError". 2. If the subject value is not an instance of "name_or_attr" (tested via "isinstance()"), the class pattern fails. 3. 
If no pattern arguments are present, the pattern succeeds. Otherwise, the subsequent steps depend on whether keyword or positional argument patterns are present. For a number of built-in types (specified below), a single positional subpattern is accepted which will match the entire subject; for these types keyword patterns also work as for other types. If only keyword patterns are present, they are processed as follows, one by one: I. The keyword is looked up as an attribute on the subject. * If this raises an exception other than "AttributeError", the exception bubbles up. * If this raises "AttributeError", the class pattern has failed. * Else, the subpattern associated with the keyword pattern is matched against the subject’s attribute value. If this fails, the class pattern fails; if this succeeds, the match proceeds to the next keyword. II. If all keyword patterns succeed, the class pattern succeeds. If any positional patterns are present, they are converted to keyword patterns using the "__match_args__" attribute on the class "name_or_attr" before matching: I. The equivalent of "getattr(cls, "__match_args__", ())" is called. * If this raises an exception, the exception bubbles up. * If the returned value is not a tuple, the conversion fails and "TypeError" is raised. * If there are more positional patterns than "len(cls.__match_args__)", "TypeError" is raised. * Otherwise, positional pattern "i" is converted to a keyword pattern using "__match_args__[i]" as the keyword. "__match_args__[i]" must be a string; if not "TypeError" is raised. * If there are duplicate keywords, "TypeError" is raised. See also: Customizing positional arguments in class pattern matching II. Once all positional patterns have been converted to keyword patterns, the match proceeds as if there were only keyword patterns. For the following built-in types the handling of positional subpatterns is different: * "bool" * "bytearray" * "bytes" * "dict" * "float" * "frozenset" * "int" * "list" * "set" * "str" * "tuple" These classes accept a single positional argument, and the pattern there is matched against the whole object rather than an attribute. For example "int(0|1)" matches the value "0", but not the value "0.0". In simple terms "CLS(P1, attr=P2)" matches only if the following happens: * "isinstance(<subject>, CLS)" * convert "P1" to a keyword pattern using "CLS.__match_args__" * For each keyword argument "attr=P2": * "hasattr(<subject>, "attr")" * "P2" matches "<subject>.attr" * … and so on for the corresponding keyword argument/pattern pair. See also: * **PEP 634** – Structural Pattern Matching: Specification * **PEP 636** – Structural Pattern Matching: Tutorial Function definitions ==================== A function definition defines a user-defined function object (see section The standard type hierarchy): funcdef ::= [decorators] "def" funcname [type_params] "(" [parameter_list] ")" ["->" expression] ":" suite decorators ::= decorator+ decorator ::= "@" assignment_expression NEWLINE parameter_list ::= defparameter ("," defparameter)* "," "/" ["," [parameter_list_no_posonly]] | parameter_list_no_posonly parameter_list_no_posonly ::= defparameter ("," defparameter)* ["," [parameter_list_starargs]] | parameter_list_starargs parameter_list_starargs ::= "*" [parameter] ("," defparameter)* ["," ["**" parameter [","]]] | "**" parameter [","] parameter ::= identifier [":" expression] defparameter ::= parameter ["=" expression] funcname ::= identifier A function definition is an executable statement. 
Its execution binds the function name in the current local namespace to a function object (a wrapper around the executable code for the function). This function object contains a reference to the current global namespace as the global namespace to be used when the function is called. The function definition does not execute the function body; this gets executed only when the function is called. [4] A function definition may be wrapped by one or more *decorator* expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion. For example, the following code @f1(arg) @f2 def func(): pass is roughly equivalent to def func(): pass func = f1(arg)(f2(func)) except that the original function is not temporarily bound to the name "func". Changed in version 3.9: Functions may be decorated with any valid "assignment_expression". Previously, the grammar was much more restrictive; see **PEP 614** for details. A list of type parameters may be given in square brackets between the function’s name and the opening parenthesis for its parameter list. This indicates to static type checkers that the function is generic. At runtime, the type parameters can be retrieved from the function’s "__type_params__" attribute. See Generic functions for more. Changed in version 3.12: Type parameter lists are new in Python 3.12. When one or more *parameters* have the form *parameter* "=" *expression*, the function is said to have “default parameter values.” For a parameter with a default value, the corresponding *argument* may be omitted from a call, in which case the parameter’s default value is substituted. If a parameter has a default value, all following parameters up until the “"*"” must also have a default value — this is a syntactic restriction that is not expressed by the grammar. **Default parameter values are evaluated from left to right when the function definition is executed.** This means that the expression is evaluated once, when the function is defined, and that the same “pre- computed” value is used for each call. This is especially important to understand when a default parameter value is a mutable object, such as a list or a dictionary: if the function modifies the object (e.g. by appending an item to a list), the default parameter value is in effect modified. This is generally not what was intended. A way around this is to use "None" as the default, and explicitly test for it in the body of the function, e.g.: def whats_on_the_telly(penguin=None): if penguin is None: penguin = [] penguin.append("property of the zoo") return penguin Function call semantics are described in more detail in section Calls. A function call always assigns values to all parameters mentioned in the parameter list, either from positional arguments, from keyword arguments, or from default values. If the form “"*identifier"” is present, it is initialized to a tuple receiving any excess positional parameters, defaulting to the empty tuple. If the form “"**identifier"” is present, it is initialized to a new ordered mapping receiving any excess keyword arguments, defaulting to a new empty mapping of the same type. Parameters after “"*"” or “"*identifier"” are keyword-only parameters and may only be passed by keyword arguments. 
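To illustrate (a hypothetical function, not from the reference text): "*args" collects excess positional arguments, "**options" collects excess keyword arguments, and "timeout", which follows "*args", is keyword-only:

   >>> def connect(host, *args, timeout=10, **options):
   ...     return host, args, timeout, options
   ...
   >>> connect("example.com", 1, 2, timeout=5, retries=3)
   ('example.com', (1, 2), 5, {'retries': 3})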
Parameters before “"/"” are positional-only parameters and may only be passed by positional arguments. Changed in version 3.8: The "/" function parameter syntax may be used to indicate positional-only parameters. See **PEP 570** for details. Parameters may have an *annotation* of the form “": expression"” following the parameter name. Any parameter may have an annotation, even those of the form "*identifier" or "**identifier". Functions may have “return” annotation of the form “"-> expression"” after the parameter list. These annotations can be any valid Python expression. The presence of annotations does not change the semantics of a function. The annotation values are available as values of a dictionary keyed by the parameters’ names in the "__annotations__" attribute of the function object. If the "annotations" import from "__future__" is used, annotations are preserved as strings at runtime which enables postponed evaluation. Otherwise, they are evaluated when the function definition is executed. In this case annotations may be evaluated in a different order than they appear in the source code. It is also possible to create anonymous functions (functions not bound to a name), for immediate use in expressions. This uses lambda expressions, described in section Lambdas. Note that the lambda expression is merely a shorthand for a simplified function definition; a function defined in a “"def"” statement can be passed around or assigned to another name just like a function defined by a lambda expression. The “"def"” form is actually more powerful since it allows the execution of multiple statements and annotations. **Programmer’s note:** Functions are first-class objects. A “"def"” statement executed inside a function definition defines a local function that can be returned or passed around. Free variables used in the nested function can access the local variables of the function containing the def. See section Naming and binding for details. See also: **PEP 3107** - Function Annotations The original specification for function annotations. **PEP 484** - Type Hints Definition of a standard meaning for annotations: type hints. **PEP 526** - Syntax for Variable Annotations Ability to type hint variable declarations, including class variables and instance variables. **PEP 563** - Postponed Evaluation of Annotations Support for forward references within annotations by preserving annotations in a string form at runtime instead of eager evaluation. **PEP 318** - Decorators for Functions and Methods Function and method decorators were introduced. Class decorators were introduced in **PEP 3129**. Class definitions ================= A class definition defines a class object (see section The standard type hierarchy): classdef ::= [decorators] "class" classname [type_params] [inheritance] ":" suite inheritance ::= "(" [argument_list] ")" classname ::= identifier A class definition is an executable statement. The inheritance list usually gives a list of base classes (see Metaclasses for more advanced uses), so each item in the list should evaluate to a class object which allows subclassing. Classes without an inheritance list inherit, by default, from the base class "object"; hence, class Foo: pass is equivalent to class Foo(object): pass The class’s suite is then executed in a new execution frame (see Naming and binding), using a newly created local namespace and the original global namespace. (Usually, the suite contains mostly function definitions.) 
When the class’s suite finishes execution, its execution frame is discarded but its local namespace is saved. [5] A class object is then created using the inheritance list for the base classes and the saved local namespace for the attribute dictionary. The class name is bound to this class object in the original local namespace. The order in which attributes are defined in the class body is preserved in the new class’s "__dict__". Note that this is reliable only right after the class is created and only for classes that were defined using the definition syntax. Class creation can be customized heavily using metaclasses. Classes can also be decorated: just like when decorating functions, @f1(arg) @f2 class Foo: pass is roughly equivalent to class Foo: pass Foo = f1(arg)(f2(Foo)) The evaluation rules for the decorator expressions are the same as for function decorators. The result is then bound to the class name. Changed in version 3.9: Classes may be decorated with any valid "assignment_expression". Previously, the grammar was much more restrictive; see **PEP 614** for details. A list of type parameters may be given in square brackets immediately after the class’s name. This indicates to static type checkers that the class is generic. At runtime, the type parameters can be retrieved from the class’s "__type_params__" attribute. See Generic classes for more. Changed in version 3.12: Type parameter lists are new in Python 3.12. **Programmer’s note:** Variables defined in the class definition are class attributes; they are shared by instances. Instance attributes can be set in a method with "self.name = value". Both class and instance attributes are accessible through the notation “"self.name"”, and an instance attribute hides a class attribute with the same name when accessed in this way. Class attributes can be used as defaults for instance attributes, but using mutable values there can lead to unexpected results. Descriptors can be used to create instance variables with different implementation details. See also: **PEP 3115** - Metaclasses in Python 3000 The proposal that changed the declaration of metaclasses to the current syntax, and the semantics for how classes with metaclasses are constructed. **PEP 3129** - Class Decorators The proposal that added class decorators. Function and method decorators were introduced in **PEP 318**. Coroutines ========== Added in version 3.5. Coroutine function definition ----------------------------- async_funcdef ::= [decorators] "async" "def" funcname "(" [parameter_list] ")" ["->" expression] ":" suite Execution of Python coroutines can be suspended and resumed at many points (see *coroutine*). "await" expressions, "async for" and "async with" can only be used in the body of a coroutine function. Functions defined with "async def" syntax are always coroutine functions, even if they do not contain "await" or "async" keywords. It is a "SyntaxError" to use a "yield from" expression inside the body of a coroutine function. An example of a coroutine function: async def func(param1, param2): do_stuff() await some_coroutine() Changed in version 3.7: "await" and "async" are now keywords; previously they were only treated as such inside the body of a coroutine function. The "async for" statement ------------------------- async_for_stmt ::= "async" for_stmt An *asynchronous iterable* provides an "__aiter__" method that directly returns an *asynchronous iterator*, which can call asynchronous code in its "__anext__" method. 
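A minimal sketch of such an object (the "Countdown" class is hypothetical, not from the reference text); it can then be consumed with the "async for" statement described next:

   import asyncio

   class Countdown:
       # __aiter__() returns an asynchronous iterator whose
       # __anext__() coroutine may await other coroutines.
       def __init__(self, n):
           self.n = n
       def __aiter__(self):
           return self
       async def __anext__(self):
           if self.n <= 0:
               raise StopAsyncIteration
           await asyncio.sleep(0)   # asynchronous work may happen here
           self.n -= 1
           return self.n + 1

   async def main():
       async for i in Countdown(3):
           print(i)                 # prints 3, 2, 1

   asyncio.run(main())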
The "async for" statement allows convenient iteration over asynchronous iterables. The following code: async for TARGET in ITER: SUITE else: SUITE2 Is semantically equivalent to: iter = (ITER) iter = type(iter).__aiter__(iter) running = True while running: try: TARGET = await type(iter).__anext__(iter) except StopAsyncIteration: running = False else: SUITE else: SUITE2 See also "__aiter__()" and "__anext__()" for details. It is a "SyntaxError" to use an "async for" statement outside the body of a coroutine function. The "async with" statement -------------------------- async_with_stmt ::= "async" with_stmt An *asynchronous context manager* is a *context manager* that is able to suspend execution in its *enter* and *exit* methods. The following code: async with EXPRESSION as TARGET: SUITE is semantically equivalent to: manager = (EXPRESSION) aenter = type(manager).__aenter__ aexit = type(manager).__aexit__ value = await aenter(manager) hit_except = False try: TARGET = value SUITE except: hit_except = True if not await aexit(manager, *sys.exc_info()): raise finally: if not hit_except: await aexit(manager, None, None, None) See also "__aenter__()" and "__aexit__()" for details. It is a "SyntaxError" to use an "async with" statement outside the body of a coroutine function. See also: **PEP 492** - Coroutines with async and await syntax The proposal that made coroutines a proper standalone concept in Python, and added supporting syntax. Type parameter lists ==================== Added in version 3.12. type_params ::= "[" type_param ("," type_param)* "]" type_param ::= typevar | typevartuple | paramspec typevar ::= identifier (":" expression)? typevartuple ::= "*" identifier paramspec ::= "**" identifier Functions (including coroutines), classes and type aliases may contain a type parameter list: def max[T](args: list[T]) -> T: ... async def amax[T](args: list[T]) -> T: ... class Bag[T]: def __iter__(self) -> Iterator[T]: ... def add(self, arg: T) -> None: ... type ListOrSet[T] = list[T] | set[T] Semantically, this indicates that the function, class, or type alias is generic over a type variable. This information is primarily used by static type checkers, and at runtime, generic objects behave much like their non-generic counterparts. Type parameters are declared in square brackets ("[]") immediately after the name of the function, class, or type alias. The type parameters are accessible within the scope of the generic object, but not elsewhere. Thus, after a declaration "def func[T](): pass", the name "T" is not available in the module scope. Below, the semantics of generic objects are described with more precision. The scope of type parameters is modeled with a special function (technically, an annotation scope) that wraps the creation of the generic object. Generic functions, classes, and type aliases have a "__type_params__" attribute listing their type parameters. Type parameters come in three kinds: * "typing.TypeVar", introduced by a plain name (e.g., "T"). Semantically, this represents a single type to a type checker. * "typing.TypeVarTuple", introduced by a name prefixed with a single asterisk (e.g., "*Ts"). Semantically, this stands for a tuple of any number of types. * "typing.ParamSpec", introduced by a name prefixed with two asterisks (e.g., "**P"). Semantically, this stands for the parameters of a callable. "typing.TypeVar" declarations can define *bounds* and *constraints* with a colon (":") followed by an expression. A single expression after the colon indicates a bound (e.g. 
"T: int"). Semantically, this means that the "typing.TypeVar" can only represent types that are a subtype of this bound. A parenthesized tuple of expressions after the colon indicates a set of constraints (e.g. "T: (str, bytes)"). Each member of the tuple should be a type (again, this is not enforced at runtime). Constrained type variables can only take on one of the types in the list of constraints. For "typing.TypeVar"s declared using the type parameter list syntax, the bound and constraints are not evaluated when the generic object is created, but only when the value is explicitly accessed through the attributes "__bound__" and "__constraints__". To accomplish this, the bounds or constraints are evaluated in a separate annotation scope. "typing.TypeVarTuple"s and "typing.ParamSpec"s cannot have bounds or constraints. The following example indicates the full set of allowed type parameter declarations: def overly_generic[ SimpleTypeVar, TypeVarWithBound: int, TypeVarWithConstraints: (str, bytes), *SimpleTypeVarTuple, **SimpleParamSpec, ]( a: SimpleTypeVar, b: TypeVarWithBound, c: Callable[SimpleParamSpec, TypeVarWithConstraints], *d: SimpleTypeVarTuple, ): ... Generic functions ----------------- Generic functions are declared as follows: def func[T](arg: T): ... This syntax is equivalent to: annotation-def TYPE_PARAMS_OF_func(): T = typing.TypeVar("T") def func(arg: T): ... func.__type_params__ = (T,) return func func = TYPE_PARAMS_OF_func() Here "annotation-def" indicates an annotation scope, which is not actually bound to any name at runtime. (One other liberty is taken in the translation: the syntax does not go through attribute access on the "typing" module, but creates an instance of "typing.TypeVar" directly.) The annotations of generic functions are evaluated within the annotation scope used for declaring the type parameters, but the function’s defaults and decorators are not. The following example illustrates the scoping rules for these cases, as well as for additional flavors of type parameters: @decorator def func[T: int, *Ts, **P](*args: *Ts, arg: Callable[P, T] = some_default): ... Except for the lazy evaluation of the "TypeVar" bound, this is equivalent to: DEFAULT_OF_arg = some_default annotation-def TYPE_PARAMS_OF_func(): annotation-def BOUND_OF_T(): return int # In reality, BOUND_OF_T() is evaluated only on demand. T = typing.TypeVar("T", bound=BOUND_OF_T()) Ts = typing.TypeVarTuple("Ts") P = typing.ParamSpec("P") def func(*args: *Ts, arg: Callable[P, T] = DEFAULT_OF_arg): ... func.__type_params__ = (T, Ts, P) return func func = decorator(TYPE_PARAMS_OF_func()) The capitalized names like "DEFAULT_OF_arg" are not actually bound at runtime. Generic classes --------------- Generic classes are declared as follows: class Bag[T]: ... This syntax is equivalent to: annotation-def TYPE_PARAMS_OF_Bag(): T = typing.TypeVar("T") class Bag(typing.Generic[T]): __type_params__ = (T,) ... return Bag Bag = TYPE_PARAMS_OF_Bag() Here again "annotation-def" (not a real keyword) indicates an annotation scope, and the name "TYPE_PARAMS_OF_Bag" is not actually bound at runtime. Generic classes implicitly inherit from "typing.Generic". The base classes and keyword arguments of generic classes are evaluated within the type scope for the type parameters, and decorators are evaluated outside that scope. This is illustrated by this example: @decorator class Bag(Base[T], arg=T): ... 
This is equivalent to:

   annotation-def TYPE_PARAMS_OF_Bag():
       T = typing.TypeVar("T")
       class Bag(Base[T], typing.Generic[T], arg=T):
           __type_params__ = (T,)
           ...
       return Bag
   Bag = decorator(TYPE_PARAMS_OF_Bag())

Generic type aliases
--------------------

The "type" statement can also be used to create a generic type alias:

   type ListOrSet[T] = list[T] | set[T]

Except for the lazy evaluation of the value, this is equivalent to:

   annotation-def TYPE_PARAMS_OF_ListOrSet():
       T = typing.TypeVar("T")
       annotation-def VALUE_OF_ListOrSet():
           return list[T] | set[T]
       # In reality, the value is lazily evaluated
       return typing.TypeAliasType("ListOrSet", VALUE_OF_ListOrSet(), type_params=(T,))
   ListOrSet = TYPE_PARAMS_OF_ListOrSet()

Here, "annotation-def" (not a real keyword) indicates an annotation scope. The capitalized names like "TYPE_PARAMS_OF_ListOrSet" are not actually bound at runtime.

-[ Footnotes ]-

[1] The exception is propagated to the invocation stack unless there is a "finally" clause which happens to raise another exception. That new exception causes the old one to be lost.

[2] In pattern matching, a sequence is defined as one of the following:

    * a class that inherits from "collections.abc.Sequence"
    * a Python class that has been registered as "collections.abc.Sequence"
    * a builtin class that has its (CPython) "Py_TPFLAGS_SEQUENCE" bit set
    * a class that inherits from any of the above

    The following standard library classes are sequences:

    * "array.array"
    * "collections.deque"
    * "list"
    * "memoryview"
    * "range"
    * "tuple"

    Note: Subject values of type "str", "bytes", and "bytearray" do not match sequence patterns.

[3] In pattern matching, a mapping is defined as one of the following:

    * a class that inherits from "collections.abc.Mapping"
    * a Python class that has been registered as "collections.abc.Mapping"
    * a builtin class that has its (CPython) "Py_TPFLAGS_MAPPING" bit set
    * a class that inherits from any of the above

    The standard library classes "dict" and "types.MappingProxyType" are mappings.

[4] A string literal appearing as the first statement in the function body is transformed into the function’s "__doc__" attribute and therefore the function’s *docstring*.

[5] A string literal appearing as the first statement in the class body is transformed into the namespace’s "__doc__" item and therefore the class’s *docstring*.

With Statement Context Managers
*******************************

A *context manager* is an object that defines the runtime context to be established when executing a "with" statement. The context manager handles the entry into, and the exit from, the desired runtime context for the execution of the block of code. Context managers are normally invoked using the "with" statement (described in section The with statement), but can also be used by directly invoking their methods.

Typical uses of context managers include saving and restoring various kinds of global state, locking and unlocking resources, closing opened files, etc.

For more information on context managers, see Context Manager Types.

object.__enter__(self)

   Enter the runtime context related to this object. The "with" statement will bind this method’s return value to the target(s) specified in the "as" clause of the statement, if any.

object.__exit__(self, exc_type, exc_value, traceback)

   Exit the runtime context related to this object. The parameters describe the exception that caused the context to be exited. If the context was exited without an exception, all three arguments will be "None".
   If an exception is supplied, and the method wishes to suppress the exception (i.e., prevent it from being propagated), it should return a true value. Otherwise, the exception will be processed normally upon exit from this method.

   Note that "__exit__()" methods should not reraise the passed-in exception; this is the caller’s responsibility.

See also:

  **PEP 343** - The “with” statement
     The specification, background, and examples for the Python "with" statement.

The "continue" statement
************************

   continue_stmt ::= "continue"

"continue" may only occur syntactically nested in a "for" or "while" loop, but not nested in a function or class definition within that loop. It continues with the next cycle of the nearest enclosing loop.

When "continue" passes control out of a "try" statement with a "finally" clause, that "finally" clause is executed before really starting the next loop cycle.

Arithmetic conversions
**********************

When a description of an arithmetic operator below uses the phrase “the numeric arguments are converted to a common type”, this means that the operator implementation for built-in types works as follows:

* If either argument is a complex number, the other is converted to complex;

* otherwise, if either argument is a floating-point number, the other is converted to floating point;

* otherwise, both must be integers and no conversion is necessary.

Some additional rules apply for certain operators (e.g., a string as a left argument to the ‘%’ operator). Extensions must define their own conversion behavior.

Basic customization
*******************

object.__new__(cls[, ...])

   Called to create a new instance of class *cls*. "__new__()" is a static method (special-cased so you need not declare it as such) that takes the class of which an instance was requested as its first argument. The remaining arguments are those passed to the object constructor expression (the call to the class). The return value of "__new__()" should be the new object instance (usually an instance of *cls*).

   Typical implementations create a new instance of the class by invoking the superclass’s "__new__()" method using "super().__new__(cls[, ...])" with appropriate arguments and then modifying the newly created instance as necessary before returning it.

   If "__new__()" is invoked during object construction and it returns an instance of *cls*, then the new instance’s "__init__()" method will be invoked like "__init__(self[, ...])", where *self* is the new instance and the remaining arguments are the same as were passed to the object constructor.

   If "__new__()" does not return an instance of *cls*, then the new instance’s "__init__()" method will not be invoked.

   "__new__()" is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.

object.__init__(self[, ...])

   Called after the instance has been created (by "__new__()"), but before it is returned to the caller. The arguments are those passed to the class constructor expression. If a base class has an "__init__()" method, the derived class’s "__init__()" method, if any, must explicitly call it to ensure proper initialization of the base class part of the instance; for example: "super().__init__([args...])".
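   For example, a derived class can cooperate with its base class as described above; a minimal sketch (the class names here are invented for illustration):

      class Account:
          def __init__(self, owner):
              self.owner = owner

      class SavingsAccount(Account):
          def __init__(self, owner, rate):
              # Explicitly initialize the Account part of the instance.
              super().__init__(owner)
              self.rate = rate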
Because "__new__()" and "__init__()" work together in constructing objects ("__new__()" to create it, and "__init__()" to customize it), no non-"None" value may be returned by "__init__()"; doing so will cause a "TypeError" to be raised at runtime. object.__del__(self) Called when the instance is about to be destroyed. This is also called a finalizer or (improperly) a destructor. If a base class has a "__del__()" method, the derived class’s "__del__()" method, if any, must explicitly call it to ensure proper deletion of the base class part of the instance. It is possible (though not recommended!) for the "__del__()" method to postpone destruction of the instance by creating a new reference to it. This is called object *resurrection*. It is implementation-dependent whether "__del__()" is called a second time when a resurrected object is about to be destroyed; the current *CPython* implementation only calls it once. It is not guaranteed that "__del__()" methods are called for objects that still exist when the interpreter exits. "weakref.finalize" provides a straightforward way to register a cleanup function to be called when an object is garbage collected. Note: "del x" doesn’t directly call "x.__del__()" — the former decrements the reference count for "x" by one, and the latter is only called when "x"’s reference count reaches zero. **CPython implementation detail:** It is possible for a reference cycle to prevent the reference count of an object from going to zero. In this case, the cycle will be later detected and deleted by the *cyclic garbage collector*. A common cause of reference cycles is when an exception has been caught in a local variable. The frame’s locals then reference the exception, which references its own traceback, which references the locals of all frames caught in the traceback. See also: Documentation for the "gc" module. Warning: Due to the precarious circumstances under which "__del__()" methods are invoked, exceptions that occur during their execution are ignored, and a warning is printed to "sys.stderr" instead. In particular: * "__del__()" can be invoked when arbitrary code is being executed, including from any arbitrary thread. If "__del__()" needs to take a lock or invoke any other blocking resource, it may deadlock as the resource may already be taken by the code that gets interrupted to execute "__del__()". * "__del__()" can be executed during interpreter shutdown. As a consequence, the global variables it needs to access (including other modules) may already have been deleted or set to "None". Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the "__del__()" method is called. object.__repr__(self) Called by the "repr()" built-in function to compute the “official” string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form "<...some useful description...>" should be returned. The return value must be a string object. If a class defines "__repr__()" but not "__str__()", then "__repr__()" is also used when an “informal” string representation of instances of that class is required. 
This is typically used for debugging, so it is important that the representation is information-rich and unambiguous. object.__str__(self) Called by "str(object)" and the built-in functions "format()" and "print()" to compute the “informal” or nicely printable string representation of an object. The return value must be a string object. This method differs from "object.__repr__()" in that there is no expectation that "__str__()" return a valid Python expression: a more convenient or concise representation can be used. The default implementation defined by the built-in type "object" calls "object.__repr__()". object.__bytes__(self) Called by bytes to compute a byte-string representation of an object. This should return a "bytes" object. object.__format__(self, format_spec) Called by the "format()" built-in function, and by extension, evaluation of formatted string literals and the "str.format()" method, to produce a “formatted” string representation of an object. The *format_spec* argument is a string that contains a description of the formatting options desired. The interpretation of the *format_spec* argument is up to the type implementing "__format__()", however most classes will either delegate formatting to one of the built-in types, or use a similar formatting option syntax. See Format Specification Mini-Language for a description of the standard formatting syntax. The return value must be a string object. Changed in version 3.4: The __format__ method of "object" itself raises a "TypeError" if passed any non-empty string. Changed in version 3.7: "object.__format__(x, '')" is now equivalent to "str(x)" rather than "format(str(x), '')". object.__lt__(self, other) object.__le__(self, other) object.__eq__(self, other) object.__ne__(self, other) object.__gt__(self, other) object.__ge__(self, other) These are the so-called “rich comparison” methods. The correspondence between operator symbols and method names is as follows: "x<y" calls "x.__lt__(y)", "x<=y" calls "x.__le__(y)", "x==y" calls "x.__eq__(y)", "x!=y" calls "x.__ne__(y)", "x>y" calls "x.__gt__(y)", and "x>=y" calls "x.__ge__(y)". A rich comparison method may return the singleton "NotImplemented" if it does not implement the operation for a given pair of arguments. By convention, "False" and "True" are returned for a successful comparison. However, these methods can return any value, so if the comparison operator is used in a Boolean context (e.g., in the condition of an "if" statement), Python will call "bool()" on the value to determine if the result is true or false. By default, "object" implements "__eq__()" by using "is", returning "NotImplemented" in the case of a false comparison: "True if x is y else NotImplemented". For "__ne__()", by default it delegates to "__eq__()" and inverts the result unless it is "NotImplemented". There are no other implied relationships among the comparison operators or default implementations; for example, the truth of "(x<y or x==y)" does not imply "x<=y". To automatically generate ordering operations from a single root operation, see "functools.total_ordering()". See the paragraph on "__hash__()" for some important notes on creating *hashable* objects which support custom comparison operations and are usable as dictionary keys. 
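   A minimal sketch of these conventions (the "Version" class is hypothetical): a rich comparison method returns "NotImplemented" for operands it does not understand, and "functools.total_ordering()" derives the remaining operators from "__eq__()" and "__lt__()":

      import functools

      @functools.total_ordering
      class Version:
          def __init__(self, major, minor):
              self.key = (major, minor)

          def __eq__(self, other):
              if not isinstance(other, Version):
                  return NotImplemented  # give the other operand a chance
              return self.key == other.key

          def __lt__(self, other):
              if not isinstance(other, Version):
                  return NotImplemented
              return self.key < other.key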
There are no swapped-argument versions of these methods (to be used when the left argument does not support the operation but the right argument does); rather, "__lt__()" and "__gt__()" are each other’s reflection, "__le__()" and "__ge__()" are each other’s reflection, and "__eq__()" and "__ne__()" are their own reflection. If the operands are of different types, and the right operand’s type is a direct or indirect subclass of the left operand’s type, the reflected method of the right operand has priority, otherwise the left operand’s method has priority. Virtual subclassing is not considered. When no appropriate method returns any value other than "NotImplemented", the "==" and "!=" operators will fall back to "is" and "is not", respectively. object.__hash__(self) Called by built-in function "hash()" and for operations on members of hashed collections including "set", "frozenset", and "dict". The "__hash__()" method should return an integer. The only required property is that objects which compare equal have the same hash value; it is advised to mix together the hash values of the components of the object that also play a part in comparison of objects by packing them into a tuple and hashing the tuple. Example: def __hash__(self): return hash((self.name, self.nick, self.color)) Note: "hash()" truncates the value returned from an object’s custom "__hash__()" method to the size of a "Py_ssize_t". This is typically 8 bytes on 64-bit builds and 4 bytes on 32-bit builds. If an object’s "__hash__()" must interoperate on builds of different bit sizes, be sure to check the width on all supported builds. An easy way to do this is with "python -c "import sys; print(sys.hash_info.width)"". If a class does not define an "__eq__()" method it should not define a "__hash__()" operation either; if it defines "__eq__()" but not "__hash__()", its instances will not be usable as items in hashable collections. If a class defines mutable objects and implements an "__eq__()" method, it should not implement "__hash__()", since the implementation of *hashable* collections requires that a key’s hash value is immutable (if the object’s hash value changes, it will be in the wrong hash bucket). User-defined classes have "__eq__()" and "__hash__()" methods by default; with them, all objects compare unequal (except with themselves) and "x.__hash__()" returns an appropriate value such that "x == y" implies both that "x is y" and "hash(x) == hash(y)". A class that overrides "__eq__()" and does not define "__hash__()" will have its "__hash__()" implicitly set to "None". When the "__hash__()" method of a class is "None", instances of the class will raise an appropriate "TypeError" when a program attempts to retrieve their hash value, and will also be correctly identified as unhashable when checking "isinstance(obj, collections.abc.Hashable)". If a class that overrides "__eq__()" needs to retain the implementation of "__hash__()" from a parent class, the interpreter must be told this explicitly by setting "__hash__ = <ParentClass>.__hash__". If a class that does not override "__eq__()" wishes to suppress hash support, it should include "__hash__ = None" in the class definition. A class which defines its own "__hash__()" that explicitly raises a "TypeError" would be incorrectly identified as hashable by an "isinstance(obj, collections.abc.Hashable)" call. Note: By default, the "__hash__()" values of str and bytes objects are “salted” with an unpredictable random value. 
  Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python.

  This is intended to provide protection against a denial-of-service caused by carefully chosen inputs that exploit the worst case performance of a dict insertion, *O*(*n*^2) complexity. See http://ocert.org/advisories/ocert-2011-003.html for details.

  Changing hash values affects the iteration order of sets. Python has never made guarantees about this ordering (and it typically varies between 32-bit and 64-bit builds).

  See also "PYTHONHASHSEED".

Changed in version 3.3: Hash randomization is enabled by default.

object.__bool__(self)

   Called to implement truth value testing and the built-in operation "bool()"; should return "False" or "True". When this method is not defined, "__len__()" is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither "__len__()" nor "__bool__()", all its instances are considered true.

"pdb" — The Python Debugger
***************************

**Source code:** Lib/pdb.py

======================================================================

The module "pdb" defines an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control.

The debugger is extensible – it is actually defined as the class "Pdb". This is currently undocumented but easily understood by reading the source. The extension interface uses the modules "bdb" and "cmd".

See also:

  Module "faulthandler"
     Used to dump Python tracebacks explicitly, on a fault, after a timeout, or on a user signal.

  Module "traceback"
     Standard interface to extract, format and print stack traces of Python programs.

The typical usage to break into the debugger is to insert:

   import pdb; pdb.set_trace()

Or:

   breakpoint()

at the location you want to break into the debugger, and then run the program. You can then step through the code following this statement, and continue running without the debugger using the "continue" command.

Changed in version 3.7: The built-in "breakpoint()", when called with defaults, can be used instead of "import pdb; pdb.set_trace()".

   def double(x):
      breakpoint()
      return x * 2
   val = 3
   print(f"{val} * 2 is {double(val)}")

The debugger’s prompt is "(Pdb)", which is the indicator that you are in debug mode:

   > ...(3)double()
   -> return x * 2
   (Pdb) p x
   3
   (Pdb) continue
   3 * 2 is 6

Changed in version 3.3: Tab-completion via the "readline" module is available for commands and command arguments, e.g. the current global and local names are offered as arguments of the "p" command.

You can also invoke "pdb" from the command line to debug other scripts. For example:

   python -m pdb myscript.py

When invoked as a module, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdb’s state (such as breakpoints) and in most cases is more useful than quitting the debugger upon program’s exit.

Changed in version 3.2: Added the "-c" option to execute commands as if given in a ".pdbrc" file; see Debugger Commands.
Changed in version 3.7: Added the "-m" option to execute modules similar to the way "python -m" does. As with a script, the debugger will pause execution just before the first line of the module. Typical usage to execute a statement under control of the debugger is: >>> import pdb >>> def f(x): ... print(1 / x) >>> pdb.run("f(2)") > <string>(1)<module>() (Pdb) continue 0.5 >>> The typical usage to inspect a crashed program is: >>> import pdb >>> def f(x): ... print(1 / x) ... >>> f(0) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in f ZeroDivisionError: division by zero >>> pdb.pm() > <stdin>(2)f() (Pdb) p x 0 (Pdb) The module defines the following functions; each enters the debugger in a slightly different way: pdb.run(statement, globals=None, locals=None) Execute the *statement* (given as a string or a code object) under debugger control. The debugger prompt appears before any code is executed; you can set breakpoints and type "continue", or you can step through the statement using "step" or "next" (all these commands are explained below). The optional *globals* and *locals* arguments specify the environment in which the code is executed; by default the dictionary of the module "__main__" is used. (See the explanation of the built-in "exec()" or "eval()" functions.) pdb.runeval(expression, globals=None, locals=None) Evaluate the *expression* (given as a string or a code object) under debugger control. When "runeval()" returns, it returns the value of the *expression*. Otherwise this function is similar to "run()". pdb.runcall(function, *args, **kwds) Call the *function* (a function or method object, not a string) with the given arguments. When "runcall()" returns, it returns whatever the function call returned. The debugger prompt appears as soon as the function is entered. pdb.set_trace(*, header=None) Enter the debugger at the calling stack frame. This is useful to hard-code a breakpoint at a given point in a program, even if the code is not otherwise being debugged (e.g. when an assertion fails). If given, *header* is printed to the console just before debugging begins. Changed in version 3.7: The keyword-only argument *header*. pdb.post_mortem(traceback=None) Enter post-mortem debugging of the given *traceback* object. If no *traceback* is given, it uses the one of the exception that is currently being handled (an exception must be being handled if the default is to be used). pdb.pm() Enter post-mortem debugging of the traceback found in "sys.last_traceback". The "run*" functions and "set_trace()" are aliases for instantiating the "Pdb" class and calling the method of the same name. If you want to access further features, you have to do this yourself: class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None, nosigint=False, readrc=True) "Pdb" is the debugger class. The *completekey*, *stdin* and *stdout* arguments are passed to the underlying "cmd.Cmd" class; see the description there. The *skip* argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. [1] By default, Pdb sets a handler for the SIGINT signal (which is sent when the user presses "Ctrl-C" on the console) when you give a "continue" command. This allows you to break into the debugger again by pressing "Ctrl-C". If you want Pdb not to touch the SIGINT handler, set *nosigint* to true. 
The *readrc* argument defaults to true and controls whether Pdb will load .pdbrc files from the filesystem. Example call to enable tracing with *skip*: import pdb; pdb.Pdb(skip=['django.*']).set_trace() Raises an auditing event "pdb.Pdb" with no arguments. Changed in version 3.1: Added the *skip* parameter. Changed in version 3.2: Added the *nosigint* parameter. Previously, a SIGINT handler was never set by Pdb. Changed in version 3.6: The *readrc* argument. run(statement, globals=None, locals=None) runeval(expression, globals=None, locals=None) runcall(function, *args, **kwds) set_trace() See the documentation for the functions explained above. Debugger Commands ================= The commands recognized by the debugger are listed below. Most commands can be abbreviated to one or two letters as indicated; e.g. "h(elp)" means that either "h" or "help" can be used to enter the help command (but not "he" or "hel", nor "H" or "Help" or "HELP"). Arguments to commands must be separated by whitespace (spaces or tabs). Optional arguments are enclosed in square brackets ("[]") in the command syntax; the square brackets must not be typed. Alternatives in the command syntax are separated by a vertical bar ("|"). Entering a blank line repeats the last command entered. Exception: if the last command was a "list" command, the next 11 lines are listed. Commands that the debugger doesn’t recognize are assumed to be Python statements and are executed in the context of the program being debugged. Python statements can also be prefixed with an exclamation point ("!"). This is a powerful way to inspect the program being debugged; it is even possible to change a variable or call a function. When an exception occurs in such a statement, the exception name is printed but the debugger’s state is not changed. The debugger supports aliases. Aliases can have parameters which allows one a certain level of adaptability to the context under examination. Multiple commands may be entered on a single line, separated by ";;". (A single ";" is not used as it is the separator for multiple commands in a line that is passed to the Python parser.) No intelligence is applied to separating the commands; the input is split at the first ";;" pair, even if it is in the middle of a quoted string. A workaround for strings with double semicolons is to use implicit string concatenation "';'';'" or "";"";"". To set a temporary global variable, use a *convenience variable*. A *convenience variable* is a variable whose name starts with "$". For example, "$foo = 1" sets a global variable "$foo" which you can use in the debugger session. The *convenience variables* are cleared when the program resumes execution so it’s less likely to interfere with your program compared to using normal variables like "foo = 1". There are three preset *convenience variables*: * "$_frame": the current frame you are debugging * "$_retval": the return value if the frame is returning * "$_exception": the exception if the frame is raising an exception Added in version 3.12: Added the *convenience variable* feature. If a file ".pdbrc" exists in the user’s home directory or in the current directory, it is read with "'utf-8'" encoding and executed as if it had been typed at the debugger prompt, with the exception that empty lines and lines starting with "#" are ignored. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file. 
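For example, a ".pdbrc" file along these lines (the alias names are purely illustrative) makes two shorthands available in every debugging session:

   # Hypothetical .pdbrc; empty lines and lines starting with "#" are ignored.
   # "pl" pretty-prints the local variables of the current frame.
   alias pl pp locals()
   # "sv" shows the current value of sys.argv.
   alias sv p __import__('sys').argv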
Changed in version 3.2: ".pdbrc" can now contain commands that continue debugging, such as "continue" or "next". Previously, these commands had no effect. Changed in version 3.11: ".pdbrc" is now read with "'utf-8'" encoding. Previously, it was read with the system locale encoding. h(elp) [command] Without argument, print the list of available commands. With a *command* as argument, print help about that command. "help pdb" displays the full documentation (the docstring of the "pdb" module). Since the *command* argument must be an identifier, "help exec" must be entered to get help on the "!" command. w(here) Print a stack trace, with the most recent frame at the bottom. An arrow (">") indicates the current frame, which determines the context of most commands. d(own) [count] Move the current frame *count* (default one) levels down in the stack trace (to a newer frame). u(p) [count] Move the current frame *count* (default one) levels up in the stack trace (to an older frame). b(reak) [([filename:]lineno | function) [, condition]] With a *lineno* argument, set a break there in the current file. With a *function* argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn’t been loaded yet). The file is searched on "sys.path". Note that each breakpoint is assigned a number to which all the other breakpoint commands refer. If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored. Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any. tbreak [([filename:]lineno | function) [, condition]] Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as for "break". cl(ear) [filename:lineno | bpnumber ...] With a *filename:lineno* argument, clear all the breakpoints at this line. With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation). disable bpnumber [bpnumber ...] Disable the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled. enable bpnumber [bpnumber ...] Enable the breakpoints specified. ignore bpnumber [count] Set the ignore count for the given breakpoint number. If *count* is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. When non-zero, the *count* is decremented each time the breakpoint is reached and the breakpoint is not disabled and any associated condition evaluates to true. condition bpnumber [condition] Set a new *condition* for the breakpoint, an expression which must evaluate to true before the breakpoint is honored. If *condition* is absent, any existing condition is removed; i.e., the breakpoint is made unconditional. commands [bpnumber] Specify a list of commands for breakpoint number *bpnumber*. The commands themselves appear on the following lines. Type a line containing just "end" to terminate the commands. An example: (Pdb) commands 1 (com) p some_variable (com) end (Pdb) To remove all commands from a breakpoint, type "commands" and follow it immediately with "end"; that is, give no commands. 
With no *bpnumber* argument, "commands" refers to the last breakpoint set. You can use breakpoint commands to start your program up again. Simply use the "continue" command, or "step", or any other command that resumes execution. Specifying any command resuming execution (currently "continue", "step", "next", "return", "jump", "quit" and their abbreviations) terminates the command list (as if that command was immediately followed by end). This is because any time you resume execution (even with a simple next or step), you may encounter another breakpoint—which could have its own command list, leading to ambiguities about which list to execute. If you use the "silent" command in the command list, the usual message about stopping at a breakpoint is not printed. This may be desirable for breakpoints that are to print a specific message and then continue. If none of the other commands print anything, you see no sign that the breakpoint was reached. s(tep) Execute the current line, stop at the first possible occasion (either in a function that is called or on the next line in the current function). n(ext) Continue execution until the next line in the current function is reached or it returns. (The difference between "next" and "step" is that "step" stops inside a called function, while "next" executes called functions at (nearly) full speed, only stopping at the next line in the current function.) unt(il) [lineno] Without argument, continue execution until the line with a number greater than the current one is reached. With *lineno*, continue execution until a line with a number greater or equal to *lineno* is reached. In both cases, also stop when the current frame returns. Changed in version 3.2: Allow giving an explicit line number. r(eturn) Continue execution until the current function returns. c(ont(inue)) Continue execution, only stop when a breakpoint is encountered. j(ump) lineno Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run. It should be noted that not all jumps are allowed – for instance it is not possible to jump into the middle of a "for" loop or out of a "finally" clause. l(ist) [first[, last]] List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With "." as argument, list 11 lines around the current line. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count. The current line in the current frame is indicated by "->". If an exception is being debugged, the line where the exception was originally raised or propagated is indicated by ">>", if it differs from the current line. Changed in version 3.2: Added the ">>" marker. ll | longlist List all source code for the current function or frame. Interesting lines are marked as for "list". Added in version 3.2. a(rgs) Print the arguments of the current function and their current values. p expression Evaluate *expression* in the current context and print its value. Note: "print()" can also be used, but is not a debugger command — this executes the Python "print()" function. pp expression Like the "p" command, except the value of *expression* is pretty- printed using the "pprint" module. whatis expression Print the type of *expression*. source expression Try to get source code of *expression* and display it. 
Added in version 3.2. display [expression] Display the value of *expression* if it changed, each time execution stops in the current frame. Without *expression*, list all display expressions for the current frame. Note: Display evaluates *expression* and compares to the result of the previous evaluation of *expression*, so when the result is mutable, display may not be able to pick up the changes. Example: lst = [] breakpoint() pass lst.append(1) print(lst) Display won’t realize "lst" has been changed because the result of evaluation is modified in place by "lst.append(1)" before being compared: > example.py(3)<module>() -> pass (Pdb) display lst display lst: [] (Pdb) n > example.py(4)<module>() -> lst.append(1) (Pdb) n > example.py(5)<module>() -> print(lst) (Pdb) You can do some tricks with copy mechanism to make it work: > example.py(3)<module>() -> pass (Pdb) display lst[:] display lst[:]: [] (Pdb) n > example.py(4)<module>() -> lst.append(1) (Pdb) n > example.py(5)<module>() -> print(lst) display lst[:]: [1] [old: []] (Pdb) Added in version 3.2. undisplay [expression] Do not display *expression* anymore in the current frame. Without *expression*, clear all display expressions for the current frame. Added in version 3.2. interact Start an interactive interpreter (using the "code" module) whose global namespace contains all the (global and local) names found in the current scope. Added in version 3.2. alias [name [command]] Create an alias called *name* that executes *command*. The *command* must *not* be enclosed in quotes. Replaceable parameters can be indicated by "%1", "%2", and so on, while "%*" is replaced by all the parameters. If *command* is omitted, the current alias for *name* is shown. If no arguments are given, all aliases are listed. Aliases may be nested and can contain anything that can be legally typed at the pdb prompt. Note that internal pdb commands *can* be overridden by aliases. Such a command is then hidden until the alias is removed. Aliasing is recursively applied to the first word of the command line; all other words in the line are left alone. As an example, here are two useful aliases (especially when placed in the ".pdbrc" file): # Print instance variables (usage "pi classInst") alias pi for k in %1.__dict__.keys(): print(f"%1.{k} = {%1.__dict__[k]}") # Print instance variables in self alias ps pi self unalias name Delete the specified alias *name*. ! statement Execute the (one-line) *statement* in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command, e.g.: (Pdb) ! n=42 (Pdb) To set a global variable, you can prefix the assignment command with a "global" statement on the same line, e.g.: (Pdb) global list_options; list_options = ['-l'] (Pdb) run [args ...] restart [args ...] Restart the debugged Python program. If *args* is supplied, it is split with "shlex" and the result is used as the new "sys.argv". History, breakpoints, actions and debugger options are preserved. "restart" is an alias for "run". q(uit) Quit from the debugger. The program being executed is aborted. debug code Enter a recursive debugger that steps through *code* (which is an arbitrary expression or statement to be executed in the current environment). retval Print the return value for the last return of the current function. -[ Footnotes ]- [1] Whether a frame is considered to originate in a certain module is determined by the "__name__" in the frame globals. 
The "del" statement
*******************

   del_stmt ::= "del" target_list

Deletion is recursively defined very similarly to the way assignment is defined. Rather than spelling it out in full detail, here are some hints.

Deletion of a target list recursively deletes each target, from left to right.

Deletion of a name removes the binding of that name from the local or global namespace, depending on whether the name occurs in a "global" statement in the same code block. If the name is unbound, a "NameError" exception will be raised.

Deletion of attribute references, subscriptions and slicings is passed to the primary object involved; deletion of a slicing is in general equivalent to assignment of an empty slice of the right type (but even this is determined by the sliced object).

Changed in version 3.2: Previously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block.

Dictionary displays
*******************

A dictionary display is a possibly empty series of dict items (key/value pairs) enclosed in curly braces:

   dict_display       ::= "{" [dict_item_list | dict_comprehension] "}"
   dict_item_list     ::= dict_item ("," dict_item)* [","]
   dict_item          ::= expression ":" expression | "**" or_expr
   dict_comprehension ::= expression ":" expression comp_for

A dictionary display yields a new dictionary object.

If a comma-separated sequence of dict items is given, they are evaluated from left to right to define the entries of the dictionary: each key object is used as a key into the dictionary to store the corresponding value. This means that you can specify the same key multiple times in the dict item list, and the final dictionary’s value for that key will be the last one given.

A double asterisk "**" denotes *dictionary unpacking*. Its operand must be a *mapping*. Each mapping item is added to the new dictionary. Later values replace values already set by earlier dict items and earlier dictionary unpackings.

Added in version 3.5: Unpacking into dictionary displays, originally proposed by **PEP 448**.

A dict comprehension, in contrast to list and set comprehensions, needs two expressions separated with a colon followed by the usual “for” and “if” clauses. When the comprehension is run, the resulting key and value elements are inserted in the new dictionary in the order they are produced.

Restrictions on the types of the key values are listed earlier in section The standard type hierarchy. (To summarize, the key type should be *hashable*, which excludes all mutable objects.) Clashes between duplicate keys are not detected; the last value (textually rightmost in the display) stored for a given key value prevails.

Changed in version 3.8: Prior to Python 3.8, in dict comprehensions, the evaluation order of key and value was not well-defined. In CPython, the value was evaluated before the key. Starting with 3.8, the key is evaluated before the value, as proposed by **PEP 572**.

Interaction with dynamic features
*********************************

Name resolution of free variables occurs at runtime, not at compile time. This means that the following code will print 42:

   i = 10
   def f():
       print(i)
   i = 42
   f()

The "eval()" and "exec()" functions do not have access to the full environment for resolving names. Names may be resolved in the local and global namespaces of the caller. Free variables are not resolved in the nearest enclosing namespace, but in the global namespace.
[1] The "exec()" and "eval()" functions have optional arguments to override the global and local namespace. If only one namespace is specified, it is used for both. �elseaX The "if" statement ****************** The "if" statement is used for conditional execution: if_stmt ::= "if" assignment_expression ":" suite ("elif" assignment_expression ":" suite)* ["else" ":" suite] It selects exactly one of the suites by evaluating the expressions one by one until one is found to be true (see section Boolean operations for the definition of true and false); then that suite is executed (and no other part of the "if" statement is executed or evaluated). If all expressions are false, the suite of the "else" clause, if present, is executed. � exceptionsu� Exceptions ********** Exceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is *raised* at the point where the error is detected; it may be *handled* by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred. The Python interpreter raises an exception when it detects a run-time error (such as division by zero). A Python program can also explicitly raise an exception with the "raise" statement. Exception handlers are specified with the "try" … "except" statement. The "finally" clause of such a statement can be used to specify cleanup code which does not handle the exception, but is executed whether an exception occurred or not in the preceding code. Python uses the “termination” model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top). When an exception is not handled at all, the interpreter terminates execution of the program, or returns to its interactive main loop. In either case, it prints a stack traceback, except when the exception is "SystemExit". Exceptions are identified by class instances. The "except" clause is selected depending on the class of the instance: it must reference the class of the instance or a *non-virtual base class* thereof. The instance can be received by the handler and can carry additional information about the exceptional condition. Note: Exception messages are not part of the Python API. Their contents may change from one version of Python to the next without warning and should not be relied on by code which will run under multiple versions of the interpreter. See also the description of the "try" statement in section The try statement and "raise" statement in section The raise statement. -[ Footnotes ]- [1] This limitation occurs because the code that is executed by these operations is not available at the time the module is compiled. � execmodelu�5 Execution model *************** Structure of a program ====================== A Python program is constructed from code blocks. A *block* is a piece of Python program text that is executed as a unit. The following are blocks: a module, a function body, and a class definition. Each command typed interactively is a block. A script file (a file given as standard input to the interpreter or specified as a command line argument to the interpreter) is a code block. A script command (a command specified on the interpreter command line with the "-c" option) is a code block. 
A module run as a top level script (as module "__main__") from the command line using a "-m" argument is also a code block. The string argument passed to the built-in functions "eval()" and "exec()" is a code block. A code block is executed in an *execution frame*. A frame contains some administrative information (used for debugging) and determines where and how execution continues after the code block’s execution has completed. Naming and binding ================== Binding of names ---------------- *Names* refer to objects. Names are introduced by name binding operations. The following constructs bind names: * formal parameters to functions, * class definitions, * function definitions, * assignment expressions, * targets that are identifiers if occurring in an assignment: * "for" loop header, * after "as" in a "with" statement, "except" clause, "except*" clause, or in the as-pattern in structural pattern matching, * in a capture pattern in structural pattern matching * "import" statements. * "type" statements. * type parameter lists. The "import" statement of the form "from ... import *" binds all names defined in the imported module, except those beginning with an underscore. This form may only be used at the module level. A target occurring in a "del" statement is also considered bound for this purpose (though the actual semantics are to unbind the name). Each assignment or import statement occurs within a block defined by a class or function definition or at the module level (the top-level code block). If a name is bound in a block, it is a local variable of that block, unless declared as "nonlocal" or "global". If a name is bound at the module level, it is a global variable. (The variables of the module code block are local and global.) If a variable is used in a code block but not defined there, it is a *free variable*. Each occurrence of a name in the program text refers to the *binding* of that name established by the following name resolution rules. Resolution of names ------------------- A *scope* defines the visibility of a name within a block. If a local variable is defined in a block, its scope includes that block. If the definition occurs in a function block, the scope extends to any blocks contained within the defining one, unless a contained block introduces a different binding for the name. When a name is used in a code block, it is resolved using the nearest enclosing scope. The set of all such scopes visible to a code block is called the block’s *environment*. When a name is not found at all, a "NameError" exception is raised. If the current scope is a function scope, and the name refers to a local variable that has not yet been bound to a value at the point where the name is used, an "UnboundLocalError" exception is raised. "UnboundLocalError" is a subclass of "NameError". If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples. If the "global" statement occurs within a block, all uses of the names specified in the statement refer to the bindings of those names in the top-level namespace. 
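For instance, in the following sketch (the names are arbitrary), the "global" statement makes the assignment inside the function rebind the module-level name instead of creating a local variable:

   counter = 0

   def increment():
       global counter        # "counter" now refers to the top-level binding
       counter = counter + 1

   increment()
   print(counter)            # prints 1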
Names are resolved in the top-level namespace by searching the global namespace, i.e. the namespace of the module containing the code block, and the builtins namespace, the namespace of the module "builtins". The global namespace is searched first. If the names are not found there, the builtins namespace is searched next. If the names are also not found in the builtins namespace, new variables are created in the global namespace. The global statement must precede all uses of the listed names. The "global" statement has the same scope as a name binding operation in the same block. If the nearest enclosing scope for a free variable contains a global statement, the free variable is treated as a global. The "nonlocal" statement causes corresponding names to refer to previously bound variables in the nearest enclosing function scope. "SyntaxError" is raised at compile time if the given name does not exist in any enclosing function scope. Type parameters cannot be rebound with the "nonlocal" statement. The namespace for a module is automatically created the first time a module is imported. The main module for a script is always called "__main__". Class definition blocks and arguments to "exec()" and "eval()" are special in the context of name resolution. A class definition is an executable statement that may use and define names. These references follow the normal rules for name resolution with an exception that unbound local variables are looked up in the global namespace. The namespace of the class definition becomes the attribute dictionary of the class. The scope of names defined in a class block is limited to the class block; it does not extend to the code blocks of methods. This includes comprehensions and generator expressions, but it does not include annotation scopes, which have access to their enclosing class scopes. This means that the following will fail: class A: a = 42 b = list(a + i for i in range(10)) However, the following will succeed: class A: type Alias = Nested class Nested: pass print(A.Alias.__value__) # <type 'A.Nested'> Annotation scopes ----------------- Type parameter lists and "type" statements introduce *annotation scopes*, which behave mostly like function scopes, but with some exceptions discussed below. *Annotations* currently do not use annotation scopes, but they are expected to use annotation scopes in Python 3.13 when **PEP 649** is implemented. Annotation scopes are used in the following contexts: * Type parameter lists for generic type aliases. * Type parameter lists for generic functions. A generic function’s annotations are executed within the annotation scope, but its defaults and decorators are not. * Type parameter lists for generic classes. A generic class’s base classes and keyword arguments are executed within the annotation scope, but its decorators are not. * The bounds and constraints for type variables (lazily evaluated). * The value of type aliases (lazily evaluated). Annotation scopes differ from function scopes in the following ways: * Annotation scopes have access to their enclosing class namespace. If an annotation scope is immediately within a class scope, or within another annotation scope that is immediately within a class scope, the code in the annotation scope can use names defined in the class scope as if it were executed directly within the class body. This contrasts with regular functions defined within classes, which cannot access names defined in the class scope. 
* Expressions in annotation scopes cannot contain "yield", "yield from", "await", or ":=" expressions. (These expressions are allowed in other scopes contained within the annotation scope.) * Names defined in annotation scopes cannot be rebound with "nonlocal" statements in inner scopes. This includes only type parameters, as no other syntactic elements that can appear within annotation scopes can introduce new names. * While annotation scopes have an internal name, that name is not reflected in the *__qualname__* of objects defined within the scope. Instead, the "__qualname__" of such objects is as if the object were defined in the enclosing scope. Added in version 3.12: Annotation scopes were introduced in Python 3.12 as part of **PEP 695**. Lazy evaluation --------------- The values of type aliases created through the "type" statement are *lazily evaluated*. The same applies to the bounds and constraints of type variables created through the type parameter syntax. This means that they are not evaluated when the type alias or type variable is created. Instead, they are only evaluated when doing so is necessary to resolve an attribute access. Example: >>> type Alias = 1/0 >>> Alias.__value__ Traceback (most recent call last): ... ZeroDivisionError: division by zero >>> def func[T: 1/0](): pass >>> T = func.__type_params__[0] >>> T.__bound__ Traceback (most recent call last): ... ZeroDivisionError: division by zero Here the exception is raised only when the "__value__" attribute of the type alias or the "__bound__" attribute of the type variable is accessed. This behavior is primarily useful for references to types that have not yet been defined when the type alias or type variable is created. For example, lazy evaluation enables creation of mutually recursive type aliases: from typing import Literal type SimpleExpr = int | Parenthesized type Parenthesized = tuple[Literal["("], Expr, Literal[")"]] type Expr = SimpleExpr | tuple[SimpleExpr, Literal["+", "-"], Expr] Lazily evaluated values are evaluated in annotation scope, which means that names that appear inside the lazily evaluated value are looked up as if they were used in the immediately enclosing scope. Added in version 3.12. Builtins and restricted execution --------------------------------- **CPython implementation detail:** Users should not touch "__builtins__"; it is strictly an implementation detail. Users wanting to override values in the builtins namespace should "import" the "builtins" module and modify its attributes appropriately. The builtins namespace associated with the execution of a code block is actually found by looking up the name "__builtins__" in its global namespace; this should be a dictionary or a module (in the latter case the module’s dictionary is used). By default, when in the "__main__" module, "__builtins__" is the built-in module "builtins"; when in any other module, "__builtins__" is an alias for the dictionary of the "builtins" module itself. Interaction with dynamic features --------------------------------- Name resolution of free variables occurs at runtime, not at compile time. This means that the following code will print 42: i = 10 def f(): print(i) i = 42 f() The "eval()" and "exec()" functions do not have access to the full environment for resolving names. Names may be resolved in the local and global namespaces of the caller. Free variables are not resolved in the nearest enclosing namespace, but in the global namespace. 
[1] The "exec()" and "eval()" functions have optional arguments to override the global and local namespace. If only one namespace is specified, it is used for both. Exceptions ========== Exceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is *raised* at the point where the error is detected; it may be *handled* by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred. The Python interpreter raises an exception when it detects a run-time error (such as division by zero). A Python program can also explicitly raise an exception with the "raise" statement. Exception handlers are specified with the "try" … "except" statement. The "finally" clause of such a statement can be used to specify cleanup code which does not handle the exception, but is executed whether an exception occurred or not in the preceding code. Python uses the “termination” model of error handling: an exception handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top). When an exception is not handled at all, the interpreter terminates execution of the program, or returns to its interactive main loop. In either case, it prints a stack traceback, except when the exception is "SystemExit". Exceptions are identified by class instances. The "except" clause is selected depending on the class of the instance: it must reference the class of the instance or a *non-virtual base class* thereof. The instance can be received by the handler and can carry additional information about the exceptional condition. Note: Exception messages are not part of the Python API. Their contents may change from one version of Python to the next without warning and should not be relied on by code which will run under multiple versions of the interpreter. See also the description of the "try" statement in section The try statement and "raise" statement in section The raise statement. -[ Footnotes ]- [1] This limitation occurs because the code that is executed by these operations is not available at the time the module is compiled. � exprlistsur Expression lists **************** expression_list ::= expression ("," expression)* [","] starred_list ::= starred_item ("," starred_item)* [","] starred_expression ::= expression | (starred_item ",")* [starred_item] starred_item ::= assignment_expression | "*" or_expr Except when part of a list or set display, an expression list containing at least one comma yields a tuple. The length of the tuple is the number of expressions in the list. The expressions are evaluated from left to right. An asterisk "*" denotes *iterable unpacking*. Its operand must be an *iterable*. The iterable is expanded into a sequence of items, which are included in the new tuple, list, or set, at the site of the unpacking. Added in version 3.5: Iterable unpacking in expression lists, originally proposed by **PEP 448**. A trailing comma is required only to create a one-item tuple, such as "1,"; it is optional in all other cases. A single expression without a trailing comma doesn’t create a tuple, but rather yields the value of that expression. (To create an empty tuple, use an empty pair of parentheses: "()".) 
Floating-point literals
***********************

Floating-point literals are described by the following lexical definitions:

   floatnumber   ::= pointfloat | exponentfloat
   pointfloat    ::= [digitpart] fraction | digitpart "."
   exponentfloat ::= (digitpart | pointfloat) exponent
   digitpart     ::= digit (["_"] digit)*
   fraction      ::= "." digitpart
   exponent      ::= ("e" | "E") ["+" | "-"] digitpart

Note that the integer and exponent parts are always interpreted using radix 10. For example, "077e010" is legal, and denotes the same number as "77e10". The allowed range of floating-point literals is implementation-dependent. As in integer literals, underscores are supported for digit grouping.

Some examples of floating-point literals:

   3.14    10.    .001    1e100    3.14e-10    0e0    3.14_15_93

Changed in version 3.6: Underscores are now allowed for grouping purposes in literals.

The "for" statement
*******************

The "for" statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable object:

   for_stmt ::= "for" target_list "in" starred_list ":" suite
                ["else" ":" suite]

The "starred_list" expression is evaluated once; it should yield an *iterable* object. An *iterator* is created for that iterable. The first item provided by the iterator is then assigned to the target list using the standard rules for assignments (see Assignment statements), and the suite is executed. This repeats for each item provided by the iterator. When the iterator is exhausted, the suite in the "else" clause, if present, is executed, and the loop terminates.

A "break" statement executed in the first suite terminates the loop without executing the "else" clause’s suite. A "continue" statement executed in the first suite skips the rest of the suite and continues with the next item, or with the "else" clause if there is no next item.

The for-loop makes assignments to the variables in the target list. This overwrites all previous assignments to those variables including those made in the suite of the for-loop:

   for i in range(10):
       print(i)
       i = 5              # this will not affect the for-loop
                          # because i will be overwritten with the next
                          # index in the range

Names in the target list are not deleted when the loop is finished, but if the sequence is empty, they will not have been assigned to at all by the loop.

Hint: the built-in type "range()" represents immutable arithmetic sequences of integers. For instance, iterating "range(3)" successively yields 0, 1, and then 2.

Changed in version 3.11: Starred elements are now allowed in the expression list.

Format String Syntax
********************

The "str.format()" method and the "Formatter" class share the same syntax for format strings (although in the case of "Formatter", subclasses can define their own format string syntax). The syntax is related to that of formatted string literals, but it is less sophisticated and, in particular, does not support arbitrary expressions.

Format strings contain “replacement fields” surrounded by curly braces "{}". Anything that is not contained in braces is considered literal text, which is copied unchanged to the output. If you need to include a brace character in the literal text, it can be escaped by doubling: "{{" and "}}".

The grammar for a replacement field is as follows:

   replacement_field ::= "{" [field_name] ["!" conversion] [":" format_spec] "}"
   field_name        ::= arg_name ("." attribute_name | "[" element_index "]")*
   arg_name          ::= [identifier | digit+]
   attribute_name    ::= identifier
   element_index     ::= digit+ | index_string
   index_string      ::= <any source character except "]"> +
   conversion        ::= "r" | "s" | "a"
   format_spec       ::= format-spec:format_spec

In less formal terms, the replacement field can start with a *field_name* that specifies the object whose value is to be formatted and inserted into the output instead of the replacement field. The *field_name* is optionally followed by a *conversion* field, which is preceded by an exclamation point "'!'", and a *format_spec*, which is preceded by a colon "':'". These specify a non-default format for the replacement value.

See also the Format Specification Mini-Language section.

The *field_name* itself begins with an *arg_name* that is either a number or a keyword. If it’s a number, it refers to a positional argument, and if it’s a keyword, it refers to a named keyword argument. An *arg_name* is treated as a number if a call to "str.isdecimal()" on the string would return true. If the numerical arg_names in a format string are 0, 1, 2, … in sequence, they can all be omitted (not just some) and the numbers 0, 1, 2, … will be automatically inserted in that order. Because *arg_name* is not quote-delimited, it is not possible to specify arbitrary dictionary keys (e.g., the strings "'10'" or "':-]'") within a format string.

The *arg_name* can be followed by any number of index or attribute expressions. An expression of the form "'.name'" selects the named attribute using "getattr()", while an expression of the form "'[index]'" does an index lookup using "__getitem__()".

Changed in version 3.1: The positional argument specifiers can be omitted for "str.format()", so "'{} {}'.format(a, b)" is equivalent to "'{0} {1}'.format(a, b)".

Changed in version 3.4: The positional argument specifiers can be omitted for "Formatter".

Some simple format string examples:

   "First, thou shalt count to {0}"  # References first positional argument
   "Bring me a {}"                   # Implicitly references the first positional argument
   "From {} to {}"                   # Same as "From {0} to {1}"
   "My quest is {name}"              # References keyword argument 'name'
   "Weight in tons {0.weight}"       # 'weight' attribute of first positional arg
   "Units destroyed: {players[0]}"   # First element of keyword argument 'players'.

The *conversion* field causes a type coercion before formatting. Normally, the job of formatting a value is done by the "__format__()" method of the value itself. However, in some cases it is desirable to force a type to be formatted as a string, overriding its own definition of formatting. By converting the value to a string before calling "__format__()", the normal formatting logic is bypassed.

Three conversion flags are currently supported: "'!s'" which calls "str()" on the value, "'!r'" which calls "repr()" and "'!a'" which calls "ascii()".

Some examples:

   "Harold's a clever {0!s}"        # Calls str() on the argument first
   "Bring out the holy {name!r}"    # Calls repr() on the argument first
   "More {!a}"                      # Calls ascii() on the argument first

The *format_spec* field contains a specification of how the value should be presented, including such details as field width, alignment, padding, decimal precision and so on. Each value type can define its own “formatting mini-language” or interpretation of the *format_spec*.

Most built-in types support a common formatting mini-language, which is described in the next section.

A *format_spec* field can also include nested replacement fields within it.
These nested replacement fields may contain a field name, conversion flag and format specification, but deeper nesting is not allowed. The replacement fields within the format_spec are substituted before the *format_spec* string is interpreted. This allows the formatting of a value to be dynamically specified. See the Format examples section for some examples. Format Specification Mini-Language ================================== “Format specifications” are used within replacement fields contained within a format string to define how individual values are presented (see Format String Syntax and f-strings). They can also be passed directly to the built-in "format()" function. Each formattable type may define how the format specification is to be interpreted. Most built-in types implement the following options for format specifications, although some of the formatting options are only supported by the numeric types. A general convention is that an empty format specification produces the same result as if you had called "str()" on the value. A non-empty format specification typically modifies the result. The general form of a *standard format specifier* is: format_spec ::= [[fill]align][sign]["z"]["#"]["0"][width][grouping_option]["." precision][type] fill ::= <any character> align ::= "<" | ">" | "=" | "^" sign ::= "+" | "-" | " " width ::= digit+ grouping_option ::= "_" | "," precision ::= digit+ type ::= "b" | "c" | "d" | "e" | "E" | "f" | "F" | "g" | "G" | "n" | "o" | "s" | "x" | "X" | "%" If a valid *align* value is specified, it can be preceded by a *fill* character that can be any character and defaults to a space if omitted. It is not possible to use a literal curly brace (”"{"” or “"}"”) as the *fill* character in a formatted string literal or when using the "str.format()" method. However, it is possible to insert a curly brace with a nested replacement field. This limitation doesn’t affect the "format()" function. The meaning of the various alignment options is as follows: +-----------+------------------------------------------------------------+ | Option | Meaning | |===========|============================================================| | "'<'" | Forces the field to be left-aligned within the available | | | space (this is the default for most objects). | +-----------+------------------------------------------------------------+ | "'>'" | Forces the field to be right-aligned within the available | | | space (this is the default for numbers). | +-----------+------------------------------------------------------------+ | "'='" | Forces the padding to be placed after the sign (if any) | | | but before the digits. This is used for printing fields | | | in the form ‘+000000120’. This alignment option is only | | | valid for numeric types. It becomes the default for | | | numbers when ‘0’ immediately precedes the field width. | +-----------+------------------------------------------------------------+ | "'^'" | Forces the field to be centered within the available | | | space. | +-----------+------------------------------------------------------------+ Note that unless a minimum field width is defined, the field width will always be the same size as the data to fill it, so that the alignment option has no meaning in this case. 
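As a brief illustration of the fill and alignment options just described (the values are arbitrary):

   >>> '{:*^10}'.format('py')    # '*' as the fill character, centered
   '****py****'
   >>> '{:0=+10}'.format(42)     # '=': padding goes after the sign
   '+000000042'
   >>> '{:0=10}'.format(-42)
   '-000000042'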
The *sign* option is only valid for number types, and can be one of the following: +-----------+------------------------------------------------------------+ | Option | Meaning | |===========|============================================================| | "'+'" | indicates that a sign should be used for both positive as | | | well as negative numbers. | +-----------+------------------------------------------------------------+ | "'-'" | indicates that a sign should be used only for negative | | | numbers (this is the default behavior). | +-----------+------------------------------------------------------------+ | space | indicates that a leading space should be used on positive | | | numbers, and a minus sign on negative numbers. | +-----------+------------------------------------------------------------+ The "'z'" option coerces negative zero floating-point values to positive zero after rounding to the format precision. This option is only valid for floating-point presentation types. Changed in version 3.11: Added the "'z'" option (see also **PEP 682**). The "'#'" option causes the “alternate form” to be used for the conversion. The alternate form is defined differently for different types. This option is only valid for integer, float and complex types. For integers, when binary, octal, or hexadecimal output is used, this option adds the respective prefix "'0b'", "'0o'", "'0x'", or "'0X'" to the output value. For float and complex the alternate form causes the result of the conversion to always contain a decimal- point character, even if no digits follow it. Normally, a decimal- point character appears in the result of these conversions only if a digit follows it. In addition, for "'g'" and "'G'" conversions, trailing zeros are not removed from the result. The "','" option signals the use of a comma for a thousands separator. For a locale aware separator, use the "'n'" integer presentation type instead. Changed in version 3.1: Added the "','" option (see also **PEP 378**). The "'_'" option signals the use of an underscore for a thousands separator for floating-point presentation types and for integer presentation type "'d'". For integer presentation types "'b'", "'o'", "'x'", and "'X'", underscores will be inserted every 4 digits. For other presentation types, specifying this option is an error. Changed in version 3.6: Added the "'_'" option (see also **PEP 515**). *width* is a decimal integer defining the minimum total field width, including any prefixes, separators, and other formatting characters. If not specified, then the field width will be determined by the content. When no explicit alignment is given, preceding the *width* field by a zero ("'0'") character enables sign-aware zero-padding for numeric types. This is equivalent to a *fill* character of "'0'" with an *alignment* type of "'='". Changed in version 3.10: Preceding the *width* field by "'0'" no longer affects the default alignment for strings. The *precision* is a decimal integer indicating how many digits should be displayed after the decimal point for presentation types "'f'" and "'F'", or before and after the decimal point for presentation types "'g'" or "'G'". For string presentation types the field indicates the maximum field size - in other words, how many characters will be used from the field content. The *precision* is not allowed for integer presentation types. Finally, the *type* determines how the data should be presented. 
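Before turning to the presentation types, here is a brief illustration of the sign, alternate-form, grouping and precision options described above (the values are arbitrary; the "'z'" example requires Python 3.11 or later):

   >>> format(255, '#x')         # '#' adds the '0x' prefix
   '0xff'
   >>> format(1234567, '_d')     # '_' groups decimal digits in thousands
   '1_234_567'
   >>> format(3.14159, '.2f')    # precision: two digits after the point
   '3.14'
   >>> format(-0.0001, 'z.2f')   # 'z' coerces negative zero to positive zero
   '0.00'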
The available string presentation types are: +-----------+------------------------------------------------------------+ | Type | Meaning | |===========|============================================================| | "'s'" | String format. This is the default type for strings and | | | may be omitted. | +-----------+------------------------------------------------------------+ | None | The same as "'s'". | +-----------+------------------------------------------------------------+ The available integer presentation types are: +-----------+------------------------------------------------------------+ | Type | Meaning | |===========|============================================================| | "'b'" | Binary format. Outputs the number in base 2. | +-----------+------------------------------------------------------------+ | "'c'" | Character. Converts the integer to the corresponding | | | unicode character before printing. | +-----------+------------------------------------------------------------+ | "'d'" | Decimal Integer. Outputs the number in base 10. | +-----------+------------------------------------------------------------+ | "'o'" | Octal format. Outputs the number in base 8. | +-----------+------------------------------------------------------------+ | "'x'" | Hex format. Outputs the number in base 16, using lower- | | | case letters for the digits above 9. | +-----------+------------------------------------------------------------+ | "'X'" | Hex format. Outputs the number in base 16, using upper- | | | case letters for the digits above 9. In case "'#'" is | | | specified, the prefix "'0x'" will be upper-cased to "'0X'" | | | as well. | +-----------+------------------------------------------------------------+ | "'n'" | Number. This is the same as "'d'", except that it uses the | | | current locale setting to insert the appropriate number | | | separator characters. | +-----------+------------------------------------------------------------+ | None | The same as "'d'". | +-----------+------------------------------------------------------------+ In addition to the above presentation types, integers can be formatted with the floating-point presentation types listed below (except "'n'" and "None"). When doing so, "float()" is used to convert the integer to a floating-point number before formatting. The available presentation types for "float" and "Decimal" values are: +-----------+------------------------------------------------------------+ | Type | Meaning | |===========|============================================================| | "'e'" | Scientific notation. For a given precision "p", formats | | | the number in scientific notation with the letter ‘e’ | | | separating the coefficient from the exponent. The | | | coefficient has one digit before and "p" digits after the | | | decimal point, for a total of "p + 1" significant digits. | | | With no precision given, uses a precision of "6" digits | | | after the decimal point for "float", and shows all | | | coefficient digits for "Decimal". If no digits follow the | | | decimal point, the decimal point is also removed unless | | | the "#" option is used. | +-----------+------------------------------------------------------------+ | "'E'" | Scientific notation. Same as "'e'" except it uses an upper | | | case ‘E’ as the separator character. | +-----------+------------------------------------------------------------+ | "'f'" | Fixed-point notation. 
For a given precision "p", formats | | | the number as a decimal number with exactly "p" digits | | | following the decimal point. With no precision given, uses | | | a precision of "6" digits after the decimal point for | | | "float", and uses a precision large enough to show all | | | coefficient digits for "Decimal". If no digits follow the | | | decimal point, the decimal point is also removed unless | | | the "#" option is used. | +-----------+------------------------------------------------------------+ | "'F'" | Fixed-point notation. Same as "'f'", but converts "nan" to | | | "NAN" and "inf" to "INF". | +-----------+------------------------------------------------------------+ | "'g'" | General format. For a given precision "p >= 1", this | | | rounds the number to "p" significant digits and then | | | formats the result in either fixed-point format or in | | | scientific notation, depending on its magnitude. A | | | precision of "0" is treated as equivalent to a precision | | | of "1". The precise rules are as follows: suppose that | | | the result formatted with presentation type "'e'" and | | | precision "p-1" would have exponent "exp". Then, if "m <= | | | exp < p", where "m" is -4 for floats and -6 for | | | "Decimals", the number is formatted with presentation type | | | "'f'" and precision "p-1-exp". Otherwise, the number is | | | formatted with presentation type "'e'" and precision | | | "p-1". In both cases insignificant trailing zeros are | | | removed from the significand, and the decimal point is | | | also removed if there are no remaining digits following | | | it, unless the "'#'" option is used. With no precision | | | given, uses a precision of "6" significant digits for | | | "float". For "Decimal", the coefficient of the result is | | | formed from the coefficient digits of the value; | | | scientific notation is used for values smaller than "1e-6" | | | in absolute value and values where the place value of the | | | least significant digit is larger than 1, and fixed-point | | | notation is used otherwise. Positive and negative | | | infinity, positive and negative zero, and nans, are | | | formatted as "inf", "-inf", "0", "-0" and "nan" | | | respectively, regardless of the precision. | +-----------+------------------------------------------------------------+ | "'G'" | General format. Same as "'g'" except switches to "'E'" if | | | the number gets too large. The representations of infinity | | | and NaN are uppercased, too. | +-----------+------------------------------------------------------------+ | "'n'" | Number. This is the same as "'g'", except that it uses the | | | current locale setting to insert the appropriate number | | | separator characters. | +-----------+------------------------------------------------------------+ | "'%'" | Percentage. Multiplies the number by 100 and displays in | | | fixed ("'f'") format, followed by a percent sign. | +-----------+------------------------------------------------------------+ | None | For "float" this is the same as "'g'", except that when | | | fixed-point notation is used to format the result, it | | | always includes at least one digit past the decimal point. | | | The precision used is as large as needed to represent the | | | given value faithfully. For "Decimal", this is the same | | | as either "'g'" or "'G'" depending on the value of | | | "context.capitals" for the current decimal context. The | | | overall effect is to match the output of "str()" as | | | altered by the other format modifiers. 
| +-----------+------------------------------------------------------------+ Format examples =============== This section contains examples of the "str.format()" syntax and comparison with the old "%"-formatting. In most of the cases the syntax is similar to the old "%"-formatting, with the addition of the "{}" and with ":" used instead of "%". For example, "'%03.2f'" can be translated to "'{:03.2f}'". The new format syntax also supports new and different options, shown in the following examples. Accessing arguments by position: >>> '{0}, {1}, {2}'.format('a', 'b', 'c') 'a, b, c' >>> '{}, {}, {}'.format('a', 'b', 'c') # 3.1+ only 'a, b, c' >>> '{2}, {1}, {0}'.format('a', 'b', 'c') 'c, b, a' >>> '{2}, {1}, {0}'.format(*'abc') # unpacking argument sequence 'c, b, a' >>> '{0}{1}{0}'.format('abra', 'cad') # arguments' indices can be repeated 'abracadabra' Accessing arguments by name: >>> 'Coordinates: {latitude}, {longitude}'.format(latitude='37.24N', longitude='-115.81W') 'Coordinates: 37.24N, -115.81W' >>> coord = {'latitude': '37.24N', 'longitude': '-115.81W'} >>> 'Coordinates: {latitude}, {longitude}'.format(**coord) 'Coordinates: 37.24N, -115.81W' Accessing arguments’ attributes: >>> c = 3-5j >>> ('The complex number {0} is formed from the real part {0.real} ' ... 'and the imaginary part {0.imag}.').format(c) 'The complex number (3-5j) is formed from the real part 3.0 and the imaginary part -5.0.' >>> class Point: ... def __init__(self, x, y): ... self.x, self.y = x, y ... def __str__(self): ... return 'Point({self.x}, {self.y})'.format(self=self) ... >>> str(Point(4, 2)) 'Point(4, 2)' Accessing arguments’ items: >>> coord = (3, 5) >>> 'X: {0[0]}; Y: {0[1]}'.format(coord) 'X: 3; Y: 5' Replacing "%s" and "%r": >>> "repr() shows quotes: {!r}; str() doesn't: {!s}".format('test1', 'test2') "repr() shows quotes: 'test1'; str() doesn't: test2" Aligning the text and specifying a width: >>> '{:<30}'.format('left aligned') 'left aligned ' >>> '{:>30}'.format('right aligned') ' right aligned' >>> '{:^30}'.format('centered') ' centered ' >>> '{:*^30}'.format('centered') # use '*' as a fill char '***********centered***********' Replacing "%+f", "%-f", and "% f" and specifying a sign: >>> '{:+f}; {:+f}'.format(3.14, -3.14) # show it always '+3.140000; -3.140000' >>> '{: f}; {: f}'.format(3.14, -3.14) # show a space for positive numbers ' 3.140000; -3.140000' >>> '{:-f}; {:-f}'.format(3.14, -3.14) # show only the minus -- same as '{:f}; {:f}' '3.140000; -3.140000' Replacing "%x" and "%o" and converting the value to different bases: >>> # format also supports binary numbers >>> "int: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(42) 'int: 42; hex: 2a; oct: 52; bin: 101010' >>> # with 0x, 0o, or 0b as prefix: >>> "int: {0:d}; hex: {0:#x}; oct: {0:#o}; bin: {0:#b}".format(42) 'int: 42; hex: 0x2a; oct: 0o52; bin: 0b101010' Using the comma as a thousands separator: >>> '{:,}'.format(1234567890) '1,234,567,890' Expressing a percentage: >>> points = 19 >>> total = 22 >>> 'Correct answers: {:.2%}'.format(points/total) 'Correct answers: 86.36%' Using type-specific formatting: >>> import datetime >>> d = datetime.datetime(2010, 7, 4, 12, 15, 58) >>> '{:%Y-%m-%d %H:%M:%S}'.format(d) '2010-07-04 12:15:58' Nesting arguments and more complex examples: >>> for align, text in zip('<^>', ['left', 'center', 'right']): ... '{0:{fill}{align}16}'.format(text, fill=align, align=align) ... 
'left<<<<<<<<<<<<' '^^^^^center^^^^^' '>>>>>>>>>>>right' >>> >>> octets = [192, 168, 0, 1] >>> '{:02X}{:02X}{:02X}{:02X}'.format(*octets) 'C0A80001' >>> int(_, 16) 3232235521 >>> >>> width = 5 >>> for num in range(5,12): ... for base in 'dXob': ... print('{0:{width}{base}}'.format(num, base=base, width=width), end=' ') ... print() ... 5 5 5 101 6 6 6 110 7 7 7 111 8 8 10 1000 9 9 11 1001 10 A 12 1010 11 B 13 1011 �functionu3 Function definitions ******************** A function definition defines a user-defined function object (see section The standard type hierarchy): funcdef ::= [decorators] "def" funcname [type_params] "(" [parameter_list] ")" ["->" expression] ":" suite decorators ::= decorator+ decorator ::= "@" assignment_expression NEWLINE parameter_list ::= defparameter ("," defparameter)* "," "/" ["," [parameter_list_no_posonly]] | parameter_list_no_posonly parameter_list_no_posonly ::= defparameter ("," defparameter)* ["," [parameter_list_starargs]] | parameter_list_starargs parameter_list_starargs ::= "*" [parameter] ("," defparameter)* ["," ["**" parameter [","]]] | "**" parameter [","] parameter ::= identifier [":" expression] defparameter ::= parameter ["=" expression] funcname ::= identifier A function definition is an executable statement. Its execution binds the function name in the current local namespace to a function object (a wrapper around the executable code for the function). This function object contains a reference to the current global namespace as the global namespace to be used when the function is called. The function definition does not execute the function body; this gets executed only when the function is called. [4] A function definition may be wrapped by one or more *decorator* expressions. Decorator expressions are evaluated when the function is defined, in the scope that contains the function definition. The result must be a callable, which is invoked with the function object as the only argument. The returned value is bound to the function name instead of the function object. Multiple decorators are applied in nested fashion. For example, the following code @f1(arg) @f2 def func(): pass is roughly equivalent to def func(): pass func = f1(arg)(f2(func)) except that the original function is not temporarily bound to the name "func". Changed in version 3.9: Functions may be decorated with any valid "assignment_expression". Previously, the grammar was much more restrictive; see **PEP 614** for details. A list of type parameters may be given in square brackets between the function’s name and the opening parenthesis for its parameter list. This indicates to static type checkers that the function is generic. At runtime, the type parameters can be retrieved from the function’s "__type_params__" attribute. See Generic functions for more. Changed in version 3.12: Type parameter lists are new in Python 3.12. When one or more *parameters* have the form *parameter* "=" *expression*, the function is said to have “default parameter values.” For a parameter with a default value, the corresponding *argument* may be omitted from a call, in which case the parameter’s default value is substituted. If a parameter has a default value, all following parameters up until the “"*"” must also have a default value — this is a syntactic restriction that is not expressed by the grammar. 
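For example, the following sketch (the function name and parameters are arbitrary) illustrates this restriction:

   def f(a, b=1, *, c, d=2):
       # Valid: 'b' has a default, and every later parameter up to the
       # "*" also has one.  'c' is keyword-only (it follows the "*"),
       # so it may omit a default.  By contrast, "def f(b=1, a): ..."
       # would be rejected with a SyntaxError.
       return a, b, c, d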
**Default parameter values are evaluated from left to right when the function definition is executed.** This means that the expression is evaluated once, when the function is defined, and that the same “pre- computed” value is used for each call. This is especially important to understand when a default parameter value is a mutable object, such as a list or a dictionary: if the function modifies the object (e.g. by appending an item to a list), the default parameter value is in effect modified. This is generally not what was intended. A way around this is to use "None" as the default, and explicitly test for it in the body of the function, e.g.: def whats_on_the_telly(penguin=None): if penguin is None: penguin = [] penguin.append("property of the zoo") return penguin Function call semantics are described in more detail in section Calls. A function call always assigns values to all parameters mentioned in the parameter list, either from positional arguments, from keyword arguments, or from default values. If the form “"*identifier"” is present, it is initialized to a tuple receiving any excess positional parameters, defaulting to the empty tuple. If the form “"**identifier"” is present, it is initialized to a new ordered mapping receiving any excess keyword arguments, defaulting to a new empty mapping of the same type. Parameters after “"*"” or “"*identifier"” are keyword-only parameters and may only be passed by keyword arguments. Parameters before “"/"” are positional-only parameters and may only be passed by positional arguments. Changed in version 3.8: The "/" function parameter syntax may be used to indicate positional-only parameters. See **PEP 570** for details. Parameters may have an *annotation* of the form “": expression"” following the parameter name. Any parameter may have an annotation, even those of the form "*identifier" or "**identifier". Functions may have “return” annotation of the form “"-> expression"” after the parameter list. These annotations can be any valid Python expression. The presence of annotations does not change the semantics of a function. The annotation values are available as values of a dictionary keyed by the parameters’ names in the "__annotations__" attribute of the function object. If the "annotations" import from "__future__" is used, annotations are preserved as strings at runtime which enables postponed evaluation. Otherwise, they are evaluated when the function definition is executed. In this case annotations may be evaluated in a different order than they appear in the source code. It is also possible to create anonymous functions (functions not bound to a name), for immediate use in expressions. This uses lambda expressions, described in section Lambdas. Note that the lambda expression is merely a shorthand for a simplified function definition; a function defined in a “"def"” statement can be passed around or assigned to another name just like a function defined by a lambda expression. The “"def"” form is actually more powerful since it allows the execution of multiple statements and annotations. **Programmer’s note:** Functions are first-class objects. A “"def"” statement executed inside a function definition defines a local function that can be returned or passed around. Free variables used in the nested function can access the local variables of the function containing the def. See section Naming and binding for details. See also: **PEP 3107** - Function Annotations The original specification for function annotations. 
**PEP 484** - Type Hints Definition of a standard meaning for annotations: type hints. **PEP 526** - Syntax for Variable Annotations Ability to type hint variable declarations, including class variables and instance variables. **PEP 563** - Postponed Evaluation of Annotations Support for forward references within annotations by preserving annotations in a string form at runtime instead of eager evaluation. **PEP 318** - Decorators for Functions and Methods Function and method decorators were introduced. Class decorators were introduced in **PEP 3129**. �globalu� The "global" statement ********************** global_stmt ::= "global" identifier ("," identifier)* The "global" statement is a declaration which holds for the entire current code block. It means that the listed identifiers are to be interpreted as globals. It would be impossible to assign to a global variable without "global", although free variables may refer to globals without being declared global. Names listed in a "global" statement must not be used in the same code block textually preceding that "global" statement. Names listed in a "global" statement must not be defined as formal parameters, or as targets in "with" statements or "except" clauses, or in a "for" target list, "class" definition, function definition, "import" statement, or variable annotation. **CPython implementation detail:** The current implementation does not enforce some of these restrictions, but programs should not abuse this freedom, as future implementations may enforce them or silently change the meaning of the program. **Programmer’s note:** "global" is a directive to the parser. It applies only to code parsed at the same time as the "global" statement. In particular, a "global" statement contained in a string or code object supplied to the built-in "exec()" function does not affect the code block *containing* the function call, and code contained in such a string is unaffected by "global" statements in the code containing the function call. The same applies to the "eval()" and "compile()" functions. z id-classesu� Reserved classes of identifiers ******************************* Certain classes of identifiers (besides keywords) have special meanings. These classes are identified by the patterns of leading and trailing underscore characters: "_*" Not imported by "from module import *". "_" In a "case" pattern within a "match" statement, "_" is a soft keyword that denotes a wildcard. Separately, the interactive interpreter makes the result of the last evaluation available in the variable "_". (It is stored in the "builtins" module, alongside built-in functions like "print".) Elsewhere, "_" is a regular identifier. It is often used to name “special” items, but it is not special to Python itself. Note: The name "_" is often used in conjunction with internationalization; refer to the documentation for the "gettext" module for more information on this convention.It is also commonly used for unused variables. "__*__" System-defined names, informally known as “dunder” names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the Special method names section and elsewhere. More will likely be defined in future versions of Python. *Any* use of "__*__" names, in any context, that does not follow explicitly documented use, is subject to breakage without warning. "__*" Class-private names. 
Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between “private” attributes of base and derived classes. See section Identifiers (Names). �identifiersu� Identifiers and keywords ************************ Identifiers (also referred to as *names*) are described by the following lexical definitions. The syntax of identifiers in Python is based on the Unicode standard annex UAX-31, with elaboration and changes as defined below; see also **PEP 3131** for further details. Within the ASCII range (U+0001..U+007F), the valid characters for identifiers are the same as in Python 2.x: the uppercase and lowercase letters "A" through "Z", the underscore "_" and, except for the first character, the digits "0" through "9". Python 3.0 introduces additional characters from outside the ASCII range (see **PEP 3131**). For these characters, the classification uses the version of the Unicode Character Database as included in the "unicodedata" module. Identifiers are unlimited in length. Case is significant. identifier ::= xid_start xid_continue* id_start ::= <all characters in general categories Lu, Ll, Lt, Lm, Lo, Nl, the underscore, and characters with the Other_ID_Start property> id_continue ::= <all characters in id_start, plus characters in the categories Mn, Mc, Nd, Pc and others with the Other_ID_Continue property> xid_start ::= <all characters in id_start whose NFKC normalization is in "id_start xid_continue*"> xid_continue ::= <all characters in id_continue whose NFKC normalization is in "id_continue*"> The Unicode category codes mentioned above stand for: * *Lu* - uppercase letters * *Ll* - lowercase letters * *Lt* - titlecase letters * *Lm* - modifier letters * *Lo* - other letters * *Nl* - letter numbers * *Mn* - nonspacing marks * *Mc* - spacing combining marks * *Nd* - decimal numbers * *Pc* - connector punctuations * *Other_ID_Start* - explicit list of characters in PropList.txt to support backwards compatibility * *Other_ID_Continue* - likewise All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. A non-normative HTML file listing all valid identifier characters for Unicode 15.0.0 can be found at https://www.unicode.org/Public/15.0.0/ucd/DerivedCoreProperties.txt Keywords ======== The following identifiers are used as reserved words, or *keywords* of the language, and cannot be used as ordinary identifiers. They must be spelled exactly as written here: False await else import pass None break except in raise True class finally is return and continue for lambda try as def from nonlocal while assert del global not with async elif if or yield Soft Keywords ============= Added in version 3.10. Some identifiers are only reserved under specific contexts. These are known as *soft keywords*. The identifiers "match", "case", "type" and "_" can syntactically act as keywords in certain contexts, but this distinction is done at the parser level, not when tokenizing. As soft keywords, their use in the grammar is possible while still preserving compatibility with existing code that uses these names as identifier names. "match", "case", and "_" are used in the "match" statement. "type" is used in the "type" statement. Changed in version 3.12: "type" is now a soft keyword. Reserved classes of identifiers =============================== Certain classes of identifiers (besides keywords) have special meanings. 
These classes are identified by the patterns of leading and trailing underscore characters:

"_*"
   Not imported by "from module import *".

"_"
   In a "case" pattern within a "match" statement, "_" is a soft keyword that denotes a wildcard. Separately, the interactive interpreter makes the result of the last evaluation available in the variable "_". (It is stored in the "builtins" module, alongside built-in functions like "print".) Elsewhere, "_" is a regular identifier. It is often used to name “special” items, but it is not special to Python itself.

   Note: The name "_" is often used in conjunction with internationalization; refer to the documentation for the "gettext" module for more information on this convention. It is also commonly used for unused variables.

"__*__"
   System-defined names, informally known as “dunder” names. These names are defined by the interpreter and its implementation (including the standard library). Current system names are discussed in the Special method names section and elsewhere. More will likely be defined in future versions of Python. *Any* use of "__*__" names, in any context, that does not follow explicitly documented use, is subject to breakage without warning.

"__*"
   Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between “private” attributes of base and derived classes. See section Identifiers (Names).

if

imaginary

Imaginary literals
******************

Imaginary literals are described by the following lexical definitions:

   imagnumber ::= (floatnumber | digitpart) ("j" | "J")

An imaginary literal yields a complex number with a real part of 0.0. Complex numbers are represented as a pair of floating-point numbers and have the same restrictions on their range. To create a complex number with a nonzero real part, add a floating-point number to it, e.g., "(3+4j)". Some examples of imaginary literals:

   3.14j   10.j   10j   .001j   1e100j   3.14e-10j   3.14_15_93j

import