
Intermediate Python Programming

The Insider Guide to


Intermediate Python
Programming Concepts

By: Richard Ozer


Copyright 2017 Richard Ozer - All rights reserved.
The contents of this book may not be reproduced, duplicated or transmitted
without direct written permission from the author.
Under no circumstances will any legal responsibility or blame be held against
the publisher for any reparation, damages, or monetary loss due to the
information herein, either directly or indirectly.
Legal Notice:
You cannot amend, distribute, sell, use, quote or paraphrase any part or the
content within this book without the consent of the author.
Disclaimer Notice:
Please note the information contained within this document is for educational
and entertainment purposes only. No warranties of any kind are expressed or
implied. Readers acknowledge that the author is not engaging in the
rendering of legal, financial, medical or professional advice. Please consult a
licensed professional before attempting any techniques outlined in this book.
By reading this document, the reader agrees that under no circumstances is
the author responsible for any losses, direct or indirect, which are incurred
as a result of the use of information contained within this document,
including, but not limited to, errors, omissions, or inaccuracies.
Contents
Introduction
Chapter 1: Object-Oriented Programming (OOP)
Chapter 2: General Objects and Methods
Chapter 2a: Descriptors
Chapter 3: Functions in Python
Chapter 4: Generators and Iterators
Chapter 5: Lambda, Map, Filter, & Reduce
Conclusion
Introduction

It was in 1989, December to be precise, that a man named Guido van Rossum
began working on a computer language that he named Python. He had
previously worked with the team that devised the ABC language that was
part of the Amoeba operating system from the 1980s. Although he found the
ABC language workable, it lacked a few features, and this caused him no
end of frustration. What he wanted was a high-level programming language,
something that would speed up the development of Amoeba project utilities,
and ABC certainly wasn't it. However, the ABC language was set to play an
significant part in the development of the new language because Guido
borrowed the bits of ABC that he did like and then teamed them up with
features that were missing from the language.
The very first version of Python was published in February 1991. It was
object oriented; it had a system of modules; exception handling was included,
along with functions and all the core data types. Python v1.0 was officially
released in January of 1994 and included programming concepts like map,
lambda, filter and reduce.
Guido van Rossum released v1.2 while he was still working on the Amoeba
project before he moved on to the Corporation for National Research
Initiatives in Virginia. From there, he continued to work on Python using
indirect funding from DARPA to release a few more versions of the
programming language.
By the time Python 1.4 rolled around, several new features had been
included, such as keyword arguments inspired by Modula-3, built-in support
for complex numbers, and name mangling, a basic form of data hiding. On
December 31, 1997, v1.5 was released, and September 5, 2000 saw v1.6.
This was followed very closely by v2.0, in October 2000, and
this latest version included even more features, like list comprehensions. This
concept came from two other programming languages, Haskell and SETL. A
garbage collection system was also introduced, bringing in a feature that
could collect reference cycles.
The first big update to Python came with v2.2. All the in-built types in
Python and all user-defined classes that were written in the Python language
were unified into a single hierarchy. This unification is what gave Python its
model of pure and consistent object orientation. The update to the Python
class system brought more new features to provide a better experience of
programming for all users, including:
The ability to subclass any of the built-in data types
The introduction of class methods and static methods
The introduction of properties, defined through getter and setter methods
Updates to metaclasses, to the __new__() method and to the super()
function, along with an updated MRO (method resolution order) algorithm
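As a quick illustration of the property mechanism mentioned above, here is a minimal sketch; the Temperature class and its names are my own invention, not from the book. It shows attribute-style access backed by getter and setter methods:

```python
class Temperature(object):
    """Hypothetical example: a property backed by getter/setter methods."""

    def __init__(self, celsius=0.0):
        self._celsius = celsius   # leading underscore: internal by convention

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # The setter can validate before storing
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value

t = Temperature()
t.celsius = 25.0          # looks like attribute assignment, runs the setter
print(t.celsius)          # looks like attribute access, runs the getter
```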
Python remained like this until December 3, 2008, when the next major
version was released. This was Python 3, and it was released to fix a few
basic design flaws evident in Python 2. These fixes could not be
implemented in the 2.x series while maintaining backward compatibility;
they required a major new release.
Python 2 vs Python 3
Python 3 has caused the most disruption to the ecosystem, bringing with it
some major changes. These changes include:
print is now a function rather than a statement
Some well-known APIs, like dict.keys(), dict.values(), and range(), now
return views and iterators instead of lists, which improves efficiency
when these APIs are used
The rules for comparison ordering are much simpler. For example, you
can no longer sort a heterogeneous list, because all of the list elements
must be comparable to one another
There is now a single integer type; the old long type has been merged
into int
When carrying out division of a pair of integers with /, the result is
now a float and not an integer. If you want an integer result, you need
to use the floor division operator //.
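The new division behavior is easy to verify at the interpreter; a small sketch:

```python
# Python 3 division semantics: / always yields a float, // floors to an integer.
print(7 / 2)     # 3.5
print(7 // 2)    # 3
print(-7 // 2)   # -4  (floor division rounds toward negative infinity)
```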
All Python text is now Unicode, while encoded Unicode text is
represented as binary data. Attempting to mix text and binary data
raises an exception, and this breaks backward compatibility with
Python 2
New syntax was introduced, including the nonlocal statement, function
annotations, set literals, extended iterable unpacking, and dictionary
comprehensions, to name just a few
Other syntax was updated, including metaclass specification, exception
handling, and list comprehensions.
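A brief, illustrative tour of some of the new syntax listed above; the names used here are arbitrary examples of my own:

```python
# Set literal and dictionary comprehension
evens = {0, 2, 4}
squares = {n: n * n for n in range(4)}

# Extended iterable unpacking: one starred target soaks up the rest
first, *rest = [1, 2, 3, 4]

# nonlocal: rebind a variable that lives in an enclosing function scope
def counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

tick = counter()
print(first, rest)        # 1 [2, 3, 4]
print(tick(), tick())     # 1 2
```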
You can find all the details about the changes from Python 2 to Python 3 on
the official Python website but, for the purposes of this book, we are going to
be using Python 3.
That completes your introduction to the Python programming language. We
are now going to delve into some of the intermediate concepts of Python and
I have provided you with plenty of working examples so that you can see
exactly how these concepts work.
Welcome to the world of intermediate Python programming!
Chapter 1: Object-Oriented Programming (OOP)

Python is an object-oriented programming language but what does this mean?


Object-oriented Programming, better known as OOP, is a paradigm that is
based firmly on the concept that everything is an object. These objects may
have fields that contain data, called attributes, and code that comes in
the form of procedures, otherwise known as methods. One of the features of
an object-oriented programming language is that an object's methods can
access, and sometimes modify, the data fields of the object they are
associated with; to make this possible, all objects have a notion of self
or this. In object-oriented programming, we design a computer program by
building it out of a series of objects that all talk to and interact with
each other. To be fair, OOP languages are quite diverse, but the most
popular are those that are class-based. This means that each object is an
instance of a class, and the class normally determines the type of the
object.
Most of the more popular programming languages, like Python, C++, Java,
etc., are all multi-paradigm languages that provide support for object-oriented
programming to some degree. This is normally in conjunction with
something called Imperative Procedural Programming. The most significant
of all the OOP languages are Ruby, Perl, C++, Java, C#, Python PHP, Object
Pascal, Dart, Swift, Objective-C, Scala, Smalltalk and Common Lisp.
Features of OOP
We know that object-oriented programming relies on objects but it is
important to note that not all the structures and techniques associated with
objects are directly supported in all programming languages that claim to
provide support for OOP. The following features are common to those
languages, especially Python, that are considered to be object and class
oriented:
1. Shared features with predecessor languages that are non-OOP
Object-oriented languages tend to share some low-level features with the
high-level languages that preceded them. Those basic tools used for
program construction include:
Variables - able to store formatted information in a number of data types
that are built into the language, such as alphanumeric characters and
integers. This can include structures, such as lists, strings, and hash
tables, that are either built in or are the result of combining variables
using memory pointers.
Procedures - these may also be called methods, functions, routines,
or subroutines. They take an input, generate an output, and can be
used for data manipulation. More modern languages include structured
flow-control concepts such as conditionals and loops; you will find
both of these in Python.
The inclusion of modular programming support gives programmers the
ability to group their procedures into modules and files, purely for the
purpose of organization. Each module is name-spaced, which ensures that a
procedure or variable in one module cannot be confused with one of the
same name in another module.
2. Classes and Objects
Languages that provide support for OOP tend to use something called
inheritance. This helps with re-using code and with extensibility and is done
through the use of classes or prototypes. Those languages that use the class
system use two main concepts:
Classes - these are definitions of the format of the data and of the
procedures that are available for any given class or type of object. A
class may contain both data and procedures, i.e. the class contains
the data members and the member functions, the latter commonly known
as class methods.
Objects - these are instances of a class
Objects will, on occasion, correspond to something that may be found in the
real world. For example, take a graphics program: it could contain objects
like "square", "circle", or "menu". If you had a system for online
shopping, you could have objects named "customer", "shopping cart", or
"product". Sometimes, an object will represent something more abstract,
such as an open file, or an object that provides a service, like a
translation of measurements from imperial to metric.
Each object is an instance of a specific class. For example, an object
with a name field of "Marie" might be an instance of a class called
Employee. Procedures in OOP are often called methods, while variables are
called fields, properties, attributes, or members. All of this leads on to
these terms:
Class variable - this belongs to the class itself, and there is only
ever one copy of it
Instance variable - otherwise known as an attribute, this is the data
that goes with each object. Each object has its own copy of each
attribute
Member variable - this refers to the class variables and the instance
variables that have been defined by a specific class
Class method - this belongs to the class and may only access class
variables and the inputs that come from the procedure call
Instance method - this belongs to each individual object and can
access the instance variables of the object it has been called on.
It may also access class variables and the inputs
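These terms can be sketched in one short, hypothetical example; the Dog class below is my own illustration, not from the book:

```python
class Dog(object):
    species = "Canis familiaris"    # class variable: one copy shared by all instances

    def __init__(self, name):
        self.name = name            # instance variable: one copy per object

    def speak(self):                # instance method: operates on one object
        return "{} says woof".format(self.name)

    @classmethod
    def describe(cls):              # class method: operates on the class itself
        return "All dogs are {}".format(cls.species)

a = Dog("Rex")
b = Dog("Fido")
print(a.speak())       # each instance sees its own name
print(b.speak())
print(Dog.describe())  # the class method sees only class-level data
```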
Objects are accessed in much the same way as a variable with a complex
internal structure. In many OOP languages, an object is a pointer,
providing a reference to an instance of the object in memory, within a
stack or a heap. Objects provide us with a layer of abstraction that is
used to keep internal and external code separated. To use an object,
external code calls an instance method with specific input parameters; it
can also read from or write to instance variables.
To create an object, a special kind of method in the class, called the
constructor, is invoked. A program can create multiple instances of one
class on the fly, as it runs, and each instance operates independently of
the others. This is the easiest and best way to apply the same procedures
to different data sets.
3. Dynamic Dispatch and Message Passing
It is not the external code that has the responsibility of selecting the
procedural code to be executed in response to a method call; that falls to
the object. The object looks the method up in a table associated with it
at run time. This is called dynamic dispatch, and it is what distinguishes
an object from a module or an abstract data type, which have a static
implementation of the operations for all instances. If there is a chance
that multiple methods may be run for a given name, we call it multiple
dispatch. A method call is also known as message passing: the message,
consisting of the method name and any input parameters, is passed over to
the object to be dispatched.
4. Encapsulation
Encapsulation is the object-oriented concept of binding data together with
the functions that manipulate it, keeping both secure from misuse and
interference. It was encapsulation that led to the concept of data hiding.
Encapsulation, or data hiding, can be described as a class not allowing
calling code any access to its internals, permitting access only through
methods. Some programming languages allow classes to explicitly enforce
access restrictions, for example by using the private keyword to denote
internal data, or by using the public keyword to designate a method that
is intended for external code outside the class.
Methods can also be designated public or private, or given an intermediate
level like protected, which allows access from the class and its
subclasses but not from objects of another class. In Python, this concept
is enforced by convention; for example, private methods may have names
that begin with underscores. Encapsulation stops external code from
getting involved with the internals of an object, and this helps with code
refactoring: for example, a class author can change the way objects of the
class internally represent their data without any changes to the external
code, as long as the public methods keep working in the same way.
Encapsulation also encourages us to put all of the code that goes with a
specific data set within the same class, thus organizing it in such a way
that other programmers can read and comprehend it easily.
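A minimal sketch of Python's by-convention encapsulation; the BankCard class is a hypothetical example of my own:

```python
class BankCard(object):
    """Hypothetical example of Python's by-convention data hiding."""

    def __init__(self, number):
        self._number = number    # single underscore: "internal, please don't touch"
        self.__pin = "0000"      # double underscore: name-mangled to _BankCard__pin

    def last_four(self):         # public method: the supported way in
        return self._number[-4:]

card = BankCard("1234567812345678")
print(card.last_four())
# card.__pin raises AttributeError from outside the class; the mangled name
# _BankCard__pin still exists, so privacy is a convention, not a hard barrier.
```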
5. Composition and Inheritance
An object may contain other objects in its instance variables; we call
this object composition. For example, an object of the class called
Employee might contain an object of the class called Name, as well as its
own instance variables, such as first_name or position. We use object
composition to represent "has a" relationships: each employee has a name,
so every Employee object has somewhere to store a Name object.
If an OOP language supports classes, in most cases it will also support
inheritance. This lets your classes be arranged so that "is-a-type-of"
relationships are represented: for example, the class called Employee
could inherit from the class called Person. All of the methods and data
in the parent class, Person, also show up in the child class with
identical names. For example, the class called Person could define the
variables first_name and last_name along with a method called
make_full_name. These will also be available in the class called
Employee, which might then add variables called position and salary.
Using the inheritance technique, we can easily reuse procedures and data
definitions, as well as mirror real-world relationships. Instead of using
programming subroutines or database tables, developers can work with
objects that the user is likely to be more familiar with.
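A short sketch of the Person/Employee relationship described above, assuming Python 3's zero-argument super(); the exact attribute values are illustrative:

```python
class Person(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def make_full_name(self):
        return "{} {}".format(self.first_name, self.last_name)

class Employee(Person):             # Employee "is a type of" Person
    def __init__(self, first_name, last_name, position, salary):
        super().__init__(first_name, last_name)   # reuse the parent initializer
        self.position = position                  # variables added by the subclass
        self.salary = salary

e = Employee("Marie", "Curie", "Researcher", 50000)
print(e.make_full_name())   # inherited from Person, no redefinition needed
```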
A subclass can override any method that has been defined by its
superclass. Some languages allow multiple inheritance, although this can
cause complications when it comes to resolving overrides. Some languages
also provide support for mixins; in languages that have multiple
inheritance, a mixin is nothing more than a class that does not represent
an "is-a-type-of" relationship. We tend to use mixins to add the same
method to several classes. For example, a class called
UnicodeConversionMixin could provide a method called unicode_to_ascii()
when it is included in a class called FileReader and a class called
WebPageScraper, neither of which has a common parent class. You also
cannot instantiate an abstract class into an object; abstract classes are
only used to be inherited into a concrete class, which may be instantiated.
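The mixin idea can be sketched as follows; the unicode_to_ascii() implementation here is my own guess at what such a method might do, since the book does not define it:

```python
class UnicodeConversionMixin(object):
    """Mixin: carries one capability, represents no is-a relationship."""
    def unicode_to_ascii(self, text):
        # Drop any character that cannot be represented in ASCII
        return text.encode("ascii", errors="ignore").decode("ascii")

class FileReader(UnicodeConversionMixin):
    pass

class WebPageScraper(UnicodeConversionMixin):
    pass

# Two unrelated classes share the same method without a common parent
reader = FileReader()
print(reader.unicode_to_ascii("café"))   # the é is dropped
```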
The doctrine of composition over inheritance advocates implementing
"has-a" relationships using composition rather than inheritance. For
example, rather than inheriting from the class called Person, the class
called Employee could give each Employee object an internal Person object.
That Person object can be hidden from external code, even if the Person
class has multiple public methods or attributes.
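A minimal sketch of the composition-based alternative; the method names here are illustrative:

```python
class Person(object):
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

class Employee(object):
    """Composition: each Employee *has a* Person rather than *being* one."""

    def __init__(self, first_name, last_name, position):
        self._person = Person(first_name, last_name)  # internal, hidden from callers
        self.position = position

    def full_name(self):
        # Forward only what we choose to expose; Person stays an
        # implementation detail of Employee
        return "{} {}".format(self._person.first_name, self._person.last_name)

e = Employee("Marie", "Curie", "Researcher")
print(e.full_name())
```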
6. Open Recursion
Some languages support open recursion, in which a method on an object can
invoke another method on the same object, including itself, using a
special variable or keyword called self or this. These variables are
late-bound, which means they allow a method defined in a class to invoke a
method that is defined later in a subclass.
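A small sketch of late binding in action; the classes below are hypothetical:

```python
class Greeter(object):
    def greeting(self):
        return "Hello"

    def greet(self, name):
        # self is late-bound: the greeting() that runs is chosen at call
        # time, so a subclass override is picked up even by this
        # base-class method.
        return "{}, {}!".format(self.greeting(), name)

class LoudGreeter(Greeter):
    def greeting(self):
        return "HELLO"

print(Greeter().greet("Ada"))       # Hello, Ada!
print(LoudGreeter().greet("Ada"))   # HELLO, Ada!  (base method, subclass behavior)
```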
Chapter 2: General Objects and Methods

Thanks to http://intermediatepythonista.com for the code examples


Everything in Python is an object; this is the basis of object-oriented
programming. Each class provides the tools for different kinds of objects
to be created. In this chapter, we are going to skip over the class basics
and the concepts of object-oriented programming, because you already know
about these. Instead, we are going to focus our attention on the topics
that give you a better understanding of how OOP actually works in Python.
We will be working with new-style classes: those Python classes that
inherit from the object superclass.
Defining a Class
To define a class, we use the class statement to define the set of
variables, attributes, and methods that are associated with, and shared
by, a group of instances. Below, you can see a simple definition of a
class:
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return self.balance
The following objects are introduced by a class definition:
Class objects
Instance objects
Method objects
Class Objects
When a program is executed and it comes up against a class definition, a
new namespace gets created; this is the namespace that every binding of a
class variable and every method definition goes into. Be aware that this
namespace does not create a new local scope that class methods can use.
Because of this, fully qualified names are needed when you access a class
variable from within a method. For example, say you have a class called
Account with a variable called num_accounts. Any method that wants to
access this variable can only do so by using the fully qualified name,
Account.num_accounts. If the fully qualified name is not used in the
__init__() method, the result is an error, as displayed below:
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return self.balance

>>> acct = Account('obi', 10)
Traceback (most recent call last):
  File "python", line 1, in <module>
  File "python", line 9, in __init__
UnboundLocalError: local variable 'num_accounts' referenced before assignment
A class object is created when the class definition finishes executing.
The scope that was in effect before the class definition was entered is
reinstated, and the class object is bound to the class name given in the
class definition header.
Let me just take a small diversion here. One question is commonly asked:
if a class that has been created is an object, then what is the class of a
class object? In keeping with the Python philosophy that everything is an
object, the class object does indeed have a class that it is created from,
and in Python's new-style classes, that is the class called type.
>>> type(Account)
<class 'type'>
So, just to throw a bit more confusion into the mix, the type of a type,
in this case the Account type, is type. Classes like type are metaclasses:
classes that are used for creating other classes. More about that later,
just to clear up any confusion.
Class objects support two operations: attribute references and
instantiation. Attributes are referenced through the dot syntax of object
followed by attribute name (nameofobject.nameofattribute). Valid attribute
names are all the variable names and method names that were in the class
namespace when the class object was created. To make that a little
clearer:
>>> Account.num_accounts
0
>>> Account.deposit
<function Account.deposit at 0x106893668>
To instantiate a class, we use function notation. Instantiation involves
calling the class object in the same way you would a normal function,
passing any arguments that the __init__ method requires, as in the
following example:
>>> Account('obi', 10)
When a class object is instantiated, the result returned is an instance
object. If __init__ is defined in the class, it is called with the new
instance object as its first argument, carrying out any initialization
defined by the user, such as setting the initial values of instance
variables. Going back to the Account class: the name and the balance are
set, and the count of instance objects is incremented by one.
Instance Objects
If you see a class object as a cookie cutter, then the instance objects
are the cookies: they are the results that you get when you instantiate a
class object. Attribute references, to both data attributes and methods,
are the only operations that are valid on an instance object.
Method Objects
Method objects are much like function objects. For example, if x is an
instance of the class called Account, then x.deposit is an example of a
method object. A method definition takes one additional argument, called
self, which references the class instance. So why do we need to pass the
instance to a method as an argument? That is best explained with a method
call:
>>> x = Account('obi', 10)
>>> x.inquiry()
10
So, what happens when we call an instance method? Did you notice that we
called the x.inquiry() method without any argument, despite the fact that
the inquiry() method definition requires the self argument? What happened
there? Where did that argument go?
Methods have a special way of working: the object that we call the method
on gets passed as the first argument of the function. In the example
above, calling x.inquiry() is the same as calling Account.inquiry(x).
Generally, calling a method with a list of n arguments is the same as
calling the corresponding function with an argument list created by
inserting the method's object in front of the first argument.
According to the official Python tutorial:
When an instance attribute is referenced that isn't a data attribute, the
instance's class is searched. If the name denotes a valid class attribute
that is a function object, the instance object and the function object are
packed together into an abstract object: this is the method object. When
the method object is called with an argument list, a new argument list is
constructed from the instance object and the original argument list, and
the function object is called with this new argument list.
This applies to every instance method object, including the __init__()
method. The name self is not a reserved keyword, and you can use any
valid name for the first argument, as in this next version of the Account
class definition:
class Account(object):
    num_accounts = 0

    def __init__(obj, name, balance):
        obj.name = name
        obj.balance = balance
        Account.num_accounts += 1

    def del_account(obj):
        Account.num_accounts -= 1

    def deposit(obj, amt):
        obj.balance = obj.balance + amt

    def withdraw(obj, amt):
        obj.balance = obj.balance - amt

    def inquiry(obj):
        return obj.balance

>>> Account.num_accounts
0
>>> x = Account('obi', 0)
>>> x.deposit(10)
>>> Account.inquiry(x)
10
Class and Static Methods
Every method defined in a class operates, by default, on instances.
However, you can also define class methods and static methods, and we do
this by decorating the method with the @classmethod or @staticmethod
decorator, respectively.
Static Methods
A static method is a normal function that simply resides in a class
namespace. In Python 2, referencing a static method on a class returned a
function type rather than an unbound method type; in Python 3, both appear
as plain functions, and the practical difference is that a static method
does not receive the self argument, as in this example:
class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @staticmethod
    def type():
        return "Current Account"

>>> Account.deposit
<function Account.deposit at 0x106893600>
>>> Account.type
<function Account.type at 0x106893668>
When you define a static method, you use the @staticmethod decorator, and
the method does not take the self argument. Static methods allow for
better organization, as all the code related to a specific class is kept
inside that class, and a subclass can override a static method if
necessary.
Class Methods
As the name implies, a class method operates on the class rather than on
an instance. We mark it with the @classmethod decorator, and it is the
class, not the instance, that is passed to the method as the first
argument:
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_json(cls, params_json):
        params = json.loads(params_json)
        return cls(params.get("name"), params.get("balance"))

    @staticmethod
    def type():
        return "Current Account"
A good way to understand class methods is to see them as factories for
the creation of objects. Imagine that the data for an Account might
arrive in different formats: strings, JSON, tuples, and so on. We can't
define an endless number of __init__ methods, because a Python class is
only allowed one __init__ method, so this is where class methods step
into the breach to save the day.
In the Account class definition above, we wanted to initialize an account
from a JSON string object. We defined a class factory method called
from_json that takes a JSON string, extracts the parameters, and then
creates the account object using those parameters. Another example of a
class method is dict.fromkeys(), which creates a dict object from a
sequence of keys, all mapped to the same value.
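For completeness, here is a self-contained sketch of the factory pattern described above, reproducing only the relevant parts of the Account class; the sample JSON values are illustrative:

```python
import json

class Account(object):
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

    @classmethod
    def from_json(cls, params_json):
        # cls is the class itself, so subclasses get correct instances too
        params = json.loads(params_json)
        return cls(params.get("name"), params.get("balance"))

acct = Account.from_json('{"name": "obi", "balance": 50}')
print(acct.name, acct.balance)        # obi 50

# dict.fromkeys is itself a class-method factory in the standard library:
print(dict.fromkeys(["a", "b"], 0))   # {'a': 0, 'b': 0}
```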
Special Methods
On occasion, you may have user-defined classes that you want to customize,
maybe to change how the class object is created and then initialized or
perhaps you want to provide certain operations with polymorphic behavior.
Polymorphic behavior allows a user-defined class to define its own
implementation for certain operations and, to help, Python has some special
methods. These will usually be of the __*__ format where the * is
referencing the name of a method. __new__ and __init__ are examples of
these methods for customizing the way objects are created and initialized and,
for the emulation of a built-in type you would use __get__, __getitem__,
__sub__ and __add__. For the customization of access to attributes, you
would use __getattr__ or __getattribute__ for example. There are many more
special methods that you could use and we are going to look at the most
important ones below.
Object Creation Special Methods
To create a new instance of a class, Python first uses the __new__ method
to create the instance and then the __init__ method to initialize it.
Most of you will already be familiar with defining an __init__ method,
but the __new__ method is not usually defined by the user for each class;
you can define it, though, if you want to customize how class instances
are created.
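As a brief illustration, __new__ is the natural hook when subclassing an immutable type such as str, because the value must be fixed at creation time; this Upper class is a hypothetical example:

```python
class Upper(str):
    """Hypothetical example: customizing creation of an immutable type."""

    def __new__(cls, value):
        # str is immutable, so the uppercasing must happen at creation
        # time, inside __new__, before __init__ ever runs.
        return super().__new__(cls, value.upper())

s = Upper("hello")
print(s)   # HELLO
```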
Attribute Access Special Methods
We can also customize the attribute access for a class instance and we do that
through the implementation of these methods:
import json

class Account(object):
    num_accounts = 0

    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        Account.num_accounts += 1

    def del_account(self):
        Account.num_accounts -= 1

    def __getattr__(self, name):
        return "But hold on! Where is the attribute called {}".format(name)

    def deposit(self, amt):
        self.balance = self.balance + amt

    def withdraw(self, amt):
        self.balance = self.balance - amt

    def inquiry(self):
        return "Name={}, balance={}".format(self.name, self.balance)

    @classmethod
    def from_dict(cls, params):
        params_dict = json.loads(params)
        return cls(params_dict.get("name"), params_dict.get("balance"))

    @staticmethod
    def type():
        return "Current Account"

x = Account('obi', 0)
The __getattr__(self, name) method is only called when we reference an
attribute that is neither an instance attribute nor found in the object's
class tree. The method should return a value for that attribute or raise
an AttributeError exception. For example, if acct is an Account class
instance and we try to access an attribute that doesn't exist, the method
is called:
>>> acct = Account("obi", 10)
>>> acct.number
'But hold on! Where is the attribute called number'
If __getattr__ itself references an instance attribute that doesn't
exist, you may end up with an infinite loop, because __getattr__ will be
called again and again without end.
The __setattr__(self, name, value) method is called whenever we attempt
to assign to an attribute. __setattr__ should put the value into the
instance attribute dictionary directly, because assigning with
self.name = value inside __setattr__ would trigger a recursive call and
an infinite loop.
The __delattr__(self, name) method is called whenever we execute
del obj.name.
And lastly, __getattribute__(self, name) is called unconditionally on
every attribute access for instances of the class.
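To make the __setattr__ warning concrete, here is a hypothetical sketch of the safe pattern, writing into the instance dictionary instead of assigning through self:

```python
class Audited(object):
    """Hypothetical example: a safe __setattr__ that logs assignments."""

    def __init__(self):
        self.__dict__["log"] = []   # bypass __setattr__ while bootstrapping

    def __setattr__(self, name, value):
        # self.name = value here would re-invoke __setattr__ and recurse
        # forever; writing into the instance dictionary is the safe pattern.
        self.log.append(name)
        self.__dict__[name] = value

a = Audited()
a.x = 1
a.y = 2
print(a.log)   # ['x', 'y']
```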
Type Emulation Special Methods
Python has special syntax that goes with specific types. For example, we
can access elements in tuples and lists with the index notation [], we can
use the + operator to add numeric values, and so on. We can also create
classes that use this special syntax, and we do that by implementing the
special methods that the interpreter calls whenever it encounters that
syntax. The example below shows how this works to emulate a list in
Python:
class CustomList(object):
    def __init__(self, container=None):
        # this class is a wrapper that goes around another list to
        # show the special methods
        if container is None:
            self.container = []
        else:
            self.container = container

    def __len__(self):
        # this is called when a user calls len(CustomList instance)
        return len(self.container)

    def __getitem__(self, index):
        # this is called when a user indexes using square brackets
        return self.container[index]

    def __setitem__(self, index, value):
        # this is called when a user assigns to an index
        if index < len(self.container):
            self.container[index] = value
        else:
            raise IndexError()

    def __contains__(self, value):
        # this is called when the user utilizes the 'in' keyword
        return value in self.container

    def append(self, value):
        self.container.append(value)

    def __repr__(self):
        return str(self.container)

    def __add__(self, otherList):
        # this is to give support for when the + operator is used
        return CustomList(self.container + otherList.container)
So, in this example, CustomList is a wrapper that goes around a list. Purely
for illustration, we have implemented a few special methods:
__len__(self): we call this when len() is called on a CustomList
instance, as in the next example:
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> len(myList)
4
__getitem__(self, index): this is used to support the use of square
brackets for indexing on CustomList instances, as in the example
below:
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList[3]
4

__setitem__(self, key, value): we call this when we assign a value to
self[key] on a CustomList class instance, as you can see below:
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList[3] = 100
>>> myList[3]
100
__contains__(self, key): we call this to implement the membership test
operators; if the item exists in self, the return should be True, otherwise
False:
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> 4 in myList
True
__repr__(self): we call this when print is called using self as one of the
arguments; it computes self's object representation:
>>> myList = CustomList()
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> print myList
[1, 2, 3, 4]
__add__(self, otherList): We call this to compute the addition of two
CustomList instances when we use the + operator to add them together:
>>> myList = CustomList()
>>> otherList = CustomList()
>>> otherList.append(100)
>>> myList.append(1)
>>> myList.append(2)
>>> myList.append(3)
>>> myList.append(4)
>>> myList + otherList + otherList
[1, 2, 3, 4, 100, 100]

These are some of the ways in which class behavior can be customized with
the definition of different special methods.
Chapter 2a: Descriptors

Descriptors are an important part of Python and are very widely used. It is
vital that you understand descriptors if you want a bit of an edge over other
programmers. To help in this section of the chapter, I am going to discuss
descriptors using some common scenarios that you are likely to come across
in your programming. I will also tell you what descriptors are and how to use
them to solve these scenarios; throughout, I will be using new-style classes.
Imagine a program where strict type checking must be enforced for object
attributes. Because Python is a dynamic language, it has no built-in
support for this kind of type checking, but we can put our own version of it
in place, albeit a very basic one. The example below shows
you the conventional way that we would type check an object attribute:
def __init__(self, name, age):
    if isinstance(name, str):
        self.name = name
    else:
        raise TypeError("Must be a string")
    if isinstance(age, int):
        self.age = age
    else:
        raise TypeError("Must be an int")
This method is one way that you can enforce type checking, but it gets a bit
unwieldy as more arguments are added. Another way to do it would be by
creating a type_check(type, val) function. This would be called before the
assignment in the __init__ method, but how could we then implement the
checking when we want the attribute value set somewhere else? Some would
suggest the Java approach of using getters and setters, but that is also
somewhat cumbersome and really isn't Pythonic.
Now imagine a program where we want an attribute that is initialized just
once at runtime and then becomes read-only. You could probably come up
with a few special methods to implement this, but it would still be a
cumbersome way of doing things.
Lastly, imagine a program in which we customize the access to object
attributes. We could be doing this to log the access, for example. Again, it is
not too hard to find a solution but, once again, it will be cumbersome and,
perhaps more importantly, you may not be able to reuse it.
All three of these scenarios are linked by one fact: they each relate to
attribute references, in that we are attempting to customize the attribute
access.
How Do Python Descriptors Provide Solutions?
The solutions that descriptors provide to each of these scenarios are simple,
robust, quite beautiful to look at and can be reused. To put it simply, a Python
descriptor is an object that represents an attribute value. What this
means is that if an account object had an attribute name, the descriptor is an
object that represents that attribute's value. A descriptor can be any kind of
object that implements one or more of the following special methods: __get__,
__set__, or __delete__. Below you can see the signature for each method:
descr.__get__(self, obj, type=None) --> value

descr.__set__(self, obj, value) --> None

descr.__delete__(self, obj) --> None
Any object that implements only the __get__ method is a non-data descriptor,
which means it can only be read after it has been initialized. An
object that implements both __get__ and __set__ is a data descriptor, which
means the attribute is writeable.
To better understand descriptors, we are now going to look at solutions to the
previous scenarios using descriptors. Descriptors make the implementation of type
checking on object attributes very easy to do. A descriptor that implements
type checking would look like this:
class TypedProperty(object):
    def __init__(self, name, type, default=None):
        self.name = "_" + name
        self.type = type
        self.default = default if default else type()

    def __get__(self, instance, cls):
        return getattr(instance, self.name, self.default)

    def __set__(self, instance, value):
        if not isinstance(value, self.type):
            raise TypeError("Must be a %s" % self.type)
        setattr(instance, self.name, value)

    def __delete__(self, instance):
        raise AttributeError("Cannot delete the attribute")

class Foo(object):
    name = TypedProperty("name", str)
    num = TypedProperty("num", int, 42)

>>> acct = Foo()
>>> acct.name = "obi"
>>> acct.num = 1234
>>> print acct.num
1234
>>> print acct.name
obi
# attempting to assign a string to a number doesn't work
>>> acct.num = '1234'
TypeError: Must be a <type 'int'>
So, what we have done here is implement the TypedProperty descriptor. This
class will enforce type checking on any attribute of the class it is
representing. Do be aware that you can only define a descriptor legally at
class level and not at instance level, for example within the __init__ method,
as you can see above.
Take the Foo class: when we access any attribute of one of its instances, the
descriptor's __get__() method is called. The first argument after self is the
instance through which the attribute is being accessed. The descriptor's
__set__ method is called when the attribute is assigned.
To gain a better understanding of why a descriptor may be used to represent
an object attribute, you first need to understand how Python carries out
reference resolution for attributes. For an object, attribute resolution goes
through object.__getattribute__(). This turns b.x into
type(b).__dict__['x'].__get__(b, type(b)). The resolution then uses a
precedence chain to look for the attribute. This chain gives data
descriptors found in the class __dict__ priority over instance
variables; it gives those instance variables priority over any non-data
descriptor and gives __getattr__() the lowest priority. We can override
this chain by defining a custom __getattribute__ method on the object's class.
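We can verify this precedence chain with a small experiment (the class names below are made up for illustration): a data descriptor wins over an entry in the instance dictionary, while a non-data descriptor loses to it.

```python
class DataDesc(object):
    """A data descriptor: defines both __get__ and __set__."""
    def __get__(self, obj, objtype=None):
        return "from the data descriptor"
    def __set__(self, obj, value):
        pass  # ignore assignments entirely

class NonDataDesc(object):
    """A non-data descriptor: defines only __get__."""
    def __get__(self, obj, objtype=None):
        return "from the non-data descriptor"

class Demo(object):
    d = DataDesc()
    n = NonDataDesc()

demo = Demo()
# write straight into the instance dictionary, bypassing __set__
demo.__dict__["d"] = "instance value"
demo.__dict__["n"] = "instance value"

print(demo.d)  # the data descriptor wins over the instance dictionary
print(demo.n)  # the instance dictionary wins over the non-data descriptor
```

Running this prints "from the data descriptor" for demo.d but "instance value" for demo.n, exactly as the precedence chain predicts.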
Once you fully understand how descriptors work, it is easy to picture
better solutions for the remaining two scenarios we gave earlier.
Implementing a read-only attribute becomes no harder than writing a data
descriptor whose __set__ method raises an AttributeError. If you wanted to
customize access, all you would need to do is add the required functionality
in the __get__ and __set__ methods.
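For instance, the second scenario, an attribute that is set once at runtime and then becomes read-only, can be sketched with a data descriptor like this (ReadOnlyOnce and Config are illustrative names):

```python
class ReadOnlyOnce(object):
    """A data descriptor allowing exactly one assignment,
    after which the attribute becomes read-only."""
    def __init__(self, name):
        self.name = "_" + name

    def __get__(self, instance, cls):
        if instance is None:
            return self
        return getattr(instance, self.name)

    def __set__(self, instance, value):
        # reject any assignment after the first one
        if hasattr(instance, self.name):
            raise AttributeError("attribute is read-only")
        setattr(instance, self.name, value)

class Config(object):
    api_key = ReadOnlyOnce("api_key")

cfg = Config()
cfg.api_key = "secret"    # the first assignment succeeds
# cfg.api_key = "other"   # any later assignment raises AttributeError
```

Because the descriptor is a data descriptor, it takes precedence over the instance dictionary, so there is no way to sneak past it with a plain assignment.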
Class Properties
It gets a bit tiresome to have to define these descriptor classes every time we
want one so Python has given us a better way of giving an attribute a data
descriptor. Have a look at the following property signature:
property(fget=None, fset=None, fdel=None, doc=None) -> property attribute
fdel, fset, and fget are the deleter, setter and getter methods for the class. The
following example shows you how to create a property:
class Account(object):
    def __init__(self):
        self._acct_num = None

    def get_acct_num(self):
        return self._acct_num

    def set_acct_num(self, value):
        self._acct_num = value

    def del_acct_num(self):
        del self._acct_num

    acct_num = property(get_acct_num, set_acct_num, del_acct_num, "Account number property.")
Assuming that acct is an instance of the Account class, acct.acct_num is
going to invoke the getter method, acct.acct_num = value is going to invoke
the setter method and del acct.acct_num is going to invoke the deleter
method.
The example below shows you how to use the descriptor protocol, or rules, to
implement the property object and functionality:
class Property(object):
    "Emulate PyProperty_Type() in Objects/descrobject.c"

    def __init__(self, fget=None, fset=None, fdel=None, doc=None):
        self.fget = fget
        self.fset = fset
        self.fdel = fdel
        if doc is None and fget is not None:
            doc = fget.__doc__
        self.__doc__ = doc

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.fget is None:
            raise AttributeError("unreadable attribute")
        return self.fget(obj)

    def __set__(self, obj, value):
        if self.fset is None:
            raise AttributeError("can't set attribute")
        self.fset(obj, value)

    def __delete__(self, obj):
        if self.fdel is None:
            raise AttributeError("can't delete attribute")
        self.fdel(obj)

    def getter(self, fget):
        return type(self)(fget, self.fset, self.fdel, self.__doc__)

    def setter(self, fset):
        return type(self)(self.fget, fset, self.fdel, self.__doc__)

    def deleter(self, fdel):
        return type(self)(self.fget, self.fset, fdel, self.__doc__)
Python also gives us a @property decorator that we can use to create a read-
only attribute. A property object has getter, setter, and deleter
decorator methods that create copies of the property with the corresponding
accessor function set to the decorated function. To explain that better, look
at the following example:
class C(object):
    def __init__(self):
        self._x = None

    @property
    def x(self):
        # the x property. the decorator creates a read-only property
        return self._x

    @x.setter
    def x(self, value):
        # the x property setter makes the property writeable
        self._x = value

    @x.deleter
    def x(self):
        del self._x
To make a property read-only, all you would do is leave the setter method
out.
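For example, a computed attribute defined only with @property (the Circle class here is illustrative) can be read but not assigned to:

```python
class Circle(object):
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):
        # computed on access; no setter is defined, so area is read-only
        return 3.14159 * self._radius ** 2

c = Circle(2)
print(c.area)      # reading works
# c.area = 100     # would raise AttributeError: can't set attribute
```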
Descriptors are used very widely in Python and examples of non-data
descriptors are class methods, functions, and static methods.
Chapter 3: Functions in Python

Thanks to http://intermediatepythonista.com for the code examples

Functions in Python are sets of expressions or statements that are either
anonymous or named. They are first-class objects and what this means is that
there are no restrictions placed on function usage. You can use a function in
Python in the same way that you use any other value, like a number or a
string. They have attributes that we can introspect on by using the dir
function that is built into Python as per the following example:
def square(x):
return x**2

>>> square
<function square at 0x031AA230>
>>> dir(square)
['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__',
'__doc__', '__format__', '__get__', '__getattribute__', '__globals__', '__hash__', '__init__',
'__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__',
'__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'func_closure', 'func_code',
'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']
>>>
Some of the more important attributes of functions are:
__doc__ returns the docstring for the specified function:
def square(x):
"""return the square of the specified number"""
return x**2

>>> square.__doc__
'return the square of the specified number'
__name__ returns the function name:
def square(x):
    """return the square of the specified number"""
    return x**2

>>> square.__name__
'square'
__module__ will return the module name where the function is defined.
def square(x):
"""will return the square of the specified number"""
return x**2

>>> square.__module__
'__main__'
func_defaults will return a tuple of the default argument values, while
func_globals will return a reference that points to the dictionary holding the
global variables for the function.
def square(x):
"""will return the square of the specified number"""
return x**2

>>> square.func_globals
{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', 'square':
<function square at 0x10f099c08>, '__doc__': None, '__package__': None}
func_dict will return the namespace that supports arbitrary function
attributes:
def square(x):
"""will return the square of the specified number"""
return x**2
>>> square.func_dict
{}
func_closure will return a tuple of the cells that hold the bindings for the free
variables in the function.
You can pass a function as an argument to another function and a function
that can take another as an argument is usually referred to as a higher-order
function. These are vital to functional programming and an example of a
higher-order function is map. This will take an iterable and a function and
will apply the function to each of the items that are in the iterable, eventually
returning a brand-new list. The next example shows how this works when we
pass the previously defined square function and an iterable that contains
numbers to map:
>>> map(square, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
We can also define a function inside another code block of functions and they
may also be returned from another function call:
def outer():
outer_var = "outer variable"
def inner():
return outer_var
return inner
In that example, you can see that we defined a function called inner, inside
another one called outer and then returned inner when outer was executed.
You can also assign a variable with a function in the same way that you
would any object:
def outer():
outer_var = "outer variable"
def inner():
return outer_var
return inner
>>> func = outer()
>>> func
<function inner at 0x031AA270>
>>>
In this example, a function is returned by the outer function when it is called
and this function will be assigned to the variable called func; func can be
called in the same way that the returned function is called:
>>> func()
'outer variable'
Function Definitions
To create a user-defined function we use the def keyword. Any function
definition is an executable statement:
def square(x):
return x**2
Note that when the module that has the function loads into the interpreter or
is defined in the Python REPL, the definitions statement, def square(x), will
be executed.
Function Call Arguments
As well as normal arguments, functions in Python provide support for a
variable number of arguments. These come in three types as described below:
Default argument values - allow users to define default values for a
function's arguments. In this case, fewer arguments are needed to call the
function; Python uses the supplied default values for any arguments
that don't get passed during the function call. The next example
shows how this works:
def show_args(arg, def_arg=1, def_arg2=2):
return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)
We defined this function with a single normal positional argument called arg
and a pair of default arguments called def_arg and def_arg2. We can call this
function in any one of these ways:
By supplying only the non-default positional argument values. The
other arguments take on their supplied default values:
def show_args(arg, def_arg=1, def_arg2=2):
return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)

>>> show_args("peace")
'arg=peace, def_arg=1, def_arg2=2'
By supplying values that will override some of the default arguments as
well as the arguments that are non-default positionals:
def show_args(arg, def_arg=1, def_arg2=2):
return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)

>>> show_args("peace", "to Texas")
'arg=peace, def_arg=to Texas, def_arg2=2'
By supplying values for all the arguments to override even those
arguments that have a default value:
def show_args(arg, def_arg=1, def_arg2=2):
return "arg={}, def_arg={}, def_arg2={}".format(arg, def_arg, def_arg2)

>>> show_args("peace", "to Texas", "the horse is in the barn")
'arg=peace, def_arg=to Texas, def_arg2=the horse is in the barn'
You do need to be careful when you use a mutable data structure as a
default argument. A function definition is only executed once, so the default
data structures, and any reference values, are only created at the time of
definition. This means that every function call uses the same data structure,
as in the following example:
def show_args_using_mutable_defaults(arg, def_arg=[]):
def_arg.append("Hello World")
return "arg={}, def_arg={}".format(arg, def_arg)
>>> show_args_using_mutable_defaults("test")
"arg=test, def_arg=['Hello World']"
>>> show_args_using_mutable_defaults("test 2")
"arg=test 2, def_arg=['Hello World', 'Hello World']"
In this example, on each of the function calls, "Hello World" gets added to the
list called def_arg, so once two function calls have happened, there are
two "Hello World" strings in the default argument. Be aware of this when you
are using a mutable value as a default argument.
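The usual way around this pitfall is to default to None and create the mutable value inside the function body, so every call gets a fresh list (the function name here is illustrative):

```python
def show_args_using_safe_defaults(arg, def_arg=None):
    # a new list is created on every call, so calls no longer share state
    if def_arg is None:
        def_arg = []
    def_arg.append("Hello World")
    return "arg={}, def_arg={}".format(arg, def_arg)

print(show_args_using_safe_defaults("test"))    # arg=test, def_arg=['Hello World']
print(show_args_using_safe_defaults("test 2"))  # arg=test 2, def_arg=['Hello World']
```

Unlike the version above, the second call no longer sees the string appended by the first call.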
Keyword Arguments
A function may also be called using keyword arguments of the form
kwarg=value, where kwarg refers to an argument name used in the function
definition. Have a look at the next example, a function that has been defined
using positional default and non-default arguments:
def show_args(arg, def_arg=1):
return "arg={}, def_arg={}".format(arg, def_arg)
To see how function calls work using keyword arguments, you can call the
function in one of several ways:
show_args(arg="test", def_arg=3)
show_args("test")
show_args(arg="test")
show_args("test", 3)

Keyword arguments are not allowed to come before non-keyword arguments
in a function call, and every required argument must be supplied, so this call
fails:
show_args(def_arg=4)
A function call may not supply duplicate values for an argument, so this
is illegal:
show_args("test", arg="testing")
In this call, arg is a positional argument, so the value "test" is assigned
to it. Attempting to assign to the keyword arg again would be a multiple
assignment, and this is seen as illegal.
All keyword arguments that are passed need to match up with an argument
that the function accepts. The keyword order, and that includes non-
optional arguments, isn't important, so the next example, where we have
changed the argument order, is allowed:
show_args(def_arg="testing", arg="test")
Arbitrary Argument List
Python also provides support for function definitions where the function will
take an arbitrary number of arguments that have been passed into the function
in the form of a tuple. This next example shows you how this works:
def write_multiple_items(file, separator, *args):
file.write(separator.join(args))
The arbitrary arguments must follow any normal arguments; in the
example above, that is after the file argument and the separator argument.
The next example shows calls to the function that was defined in the previous
example:
f = open("test.txt", "wb")
write_multiple_items(f, " ", "one", "two", "three", "four", "five")
Note that the arguments are all grouped in one tuple and we can use the args
argument to access this tuple.
Unpacking Function Arguments
Sometimes, we may have the arguments for a function call in a list, a tuple
or a dict. These arguments can be unpacked into the function call using
the * or ** operator. Consider the following function, which takes two
positional arguments and prints out the values:
def print_args(c, d):
    print c
    print d
If we already had the values that were to be given to the function stored in a
list, we could directly unpack them into the function, like this:
>>> args = [3, 4]
>>> print_args(*args)
3
4
In much the same way, when we have keyword arguments, we can use a
dict to store the keyword-to-value mapping and use ** to unpack the
arguments into the function call, like this:
>>> def chicken(voltage, state="stone dead", action="boom"):
...     print "-- This chicken wouldn't", action,
...     print "if you put", voltage, "volts through it.",
...     print "E's", state, "!"

>>> d = {"voltage": "five million", "state": "already dead", "action": "BOOM"}
>>> chicken(**d)
-- This chicken wouldn't BOOM if you put five million volts through it. E's already
dead !
Using * and ** to Define a Function
When you define a function, you might not know before you do how many
arguments there will be. This will lead to function definitions with this
signature:
show_args(arg, *args, **kwargs)
*args represents a sequence of positional arguments of an unknown length
while **kwargs is representative of a dict containing the value mappings for
the keyword name, and this can be any number of value mappings. *args
always comes before **kwargs when defining a function. Have a look at the
next example:
def show_args(arg, *args, **kwargs):
    print arg
    for item in args:
        print item
    for key, value in kwargs.items():
        print key, value

>>> args = [1, 2, 3, 4]


>>> kwargs = dict(name='testing', age=34, year=2017)
>>> show_args("hey", *args, **kwargs)
hey
1
2
3
4
age 34
name testing
year 2017
While the function must be given the normal argument, *args and **kwargs
are always optional, as in this example:
>>> show_args("hey")
hey

At the time of the function call, the normal argument is given as normal and
the optionals get unpacked into the call.
Anonymous Functions
Anonymous functions are also supported in Python and we create these with
the lambda keyword. Python lambda expressions take this form:
lambda_expr ::= "lambda" [parameter_list]: expression
A lambda expression will return a function object once it has been evaluated,
and that object has the same attributes as a named function. Lambdas are
normally reserved for simple Python functions, like this:
>>> square = lambda x: x**2
>>> for i in range(10):
square(i)
0
1
4
9
16
25
36
49
64
81
>>>
This is the same as the named function below:
def square(x):
return x**2
Nested Functions and Closures
A function that is defined inside another function is called a nested function,
as in the following example:
```python
def outer():
outer_var = "outer variable"
def inner():
return outer_var
return inner
```
In a function definition of this type, inner is only in scope inside
outer, so it is at its most useful when it is returned (moved to the outer
scope) or when it gets passed to another function.
In a nested function, a new instance of the nested function is created on every
call to the outer function. The reason for this is that each time the outer
function executes, the new inner definition is executed but its body isn't.
Nested functions can access the environment they were created in and this is
directly down to the semantics of function definition in Python. A result of
this is that a variable that has been defined in the outer may be referenced
even when the outer function has been executed.
def outer():
outer_var = "outer variable"
def inner():
return outer_var
return inner

>>> x = outer()
>>> x
<function inner at 0x0273BCF0>
>>> x()
'outer variable'
When a nested function accesses a variable from its outer function, the
nested function is said to be closed over the referenced variable. We can use
the __closure__ special attribute to access the closed-over variables, like
this:
>>> cl = x.__closure__
>>> cl
(<cell at 0x029E4470: str object at 0x02A0FD90>,)

>>> cl[0].cell_contents
'outer variable'
Python closures are a little odd in nature. In Python 2.x and earlier, a variable
that pointed to an immutable type, like a number or string, could not be
rebound inside a closure, as in this example:
def counter():
count = 0
def c():
count += 1
return count
return c

>>> c = counter()
>>> c()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 4, in c
UnboundLocalError: local variable 'count' referenced before assignment
A somewhat odd solution would be to use a mutable type as a way of
capturing the closure, like this:
def counter():
count = [0]
def c():
count[0] += 1
return count[0]
return c

>>> c = counter()
>>> c()
1
>>> c()
2
>>> c()
3
In Python 3, we saw the introduction of the keyword, nonlocal, and this is
what we use to fix the issue of closure scoping, like this:
def counter():
count = 0
def c():
nonlocal count
count += 1
return count
return c
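With nonlocal in place (Python 3 only), the closure now maintains its count correctly, just like the mutable-list workaround but without the awkward indexing:

```python
def counter():
    count = 0
    def c():
        nonlocal count    # rebind the enclosing variable instead of a new local
        count += 1
        return count
    return c

c = counter()
print(c())   # 1
print(c())   # 2
print(c())   # 3
```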
We can use a closure for maintaining state and, in a few simple cases, it
provides a much better solution than a class, which is what we would normally
use to maintain state. To show you how this works, consider a logging
example: imagine a trivial class-based, object-oriented API that logs at
several levels:
class Log:
    def __init__(self, level):
        self._level = level

    def __call__(self, message):
        print("{}: {}".format(self._level, message))

log_info = Log("info")
log_warning = Log("warning")
log_error = Log("error")
We can implement the same functionality with closures like this:
def make_log(level):
def _(message):
print("{}: {}".format(level, message))
return _

log_info = make_log("info")
log_warning = make_log("warning")
log_error = make_log("error")
As you can see, the closure-based version is terser and more readable even
though it implements the exact same functionality.
Chapter 4: Generators and Iterators

Thanks to http://sahandsaba.com for the code examples


There is quite a lot about Python that attracts mathematicians; the support that
is built in for sets, lists and tuples, for a start, along with the notations that are
very much like the notation we see with conventional math, and list
comprehensions, much like set comprehensions, along with the set-builder
notation that goes with them.
One other set of attractive features for those that have mathematical minds
are the iterators and generators in Python, along with the itertools package
that goes with them. These are what makes it easy to write nice-looking
simple to read code that will deal with a mathematical object as an infinite
sequence, a recurrent relation, a stochastic process and a combinatorial
structure. This chapter will cover both generators and iterators and includes
plenty of hands-on working examples so you can see how it all works.
Iterators
An iterator is an object that iterates over a collection. The collection does not
have to be an object that is already in memory and it doesn't need to be finite
either. Let's dig down a bit deeper. Iterables are defined as objects that have a
method called __iter__, and this method must return an object that is an
iterator. In turn, an iterator has two methods; as well as __iter__, it has
__next__ - the former will return the iterator object while the latter will return
the next element in the iteration. Because an iterator is its own iterator, its
__iter__ method simply returns self.
To be honest, you shouldn't really call the __iter__ or __next__ methods
directly. If you use for loops or list comprehensions, Python calls them
automatically but, if you do need to call them manually, Python has special
functions built in for the purpose - use iter or next and then pass the
container or the iterator as a parameter to that function. For example, if c
were an iterable, you would use iter(c) and not c.__iter__() and, in the same
way, if it were an iterator, you would use next(it) and not it.__next__(). This
is much the same way that you would use the len() function.
Talking of len(), it is worth a mention that an iterator doesn't have to, and
very often won't, have a well-defined length. As such, iterators will rarely
implement __len__. If you are looking to count how many items
are inside an iterator, you will need to do it manually or you can use sum.
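Counting with sum can be done by generating a 1 for every element; note that this consumes the iterator in the process:

```python
it = iter([10, 20, 30])
count = sum(1 for _ in it)
print(count)   # 3

# the iterator is now exhausted, so counting again yields 0
print(sum(1 for _ in it))   # 0
```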
Not all iterables are iterators; instead, their iterator may be a different object.
For example, a list object is iterable but it is not an iterator - in
other words, it implements __iter__ but it doesn't implement __next__. A list
object's iterator is of listiterator type, as the next example shows. Also,
look at the way a list object has a well-defined length while a listiterator
object doesn't:
>>> a = [1, 2]
>>> type(a)
<type 'list'>
>>> type(iter(a))
<type 'listiterator'>
>>> it = iter(a)
>>> next(it)
1
>>> next(it)
2
>>> next(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> len(a)
2
>>> len(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'listiterator' has no len()
The Python interpreter expects a StopIteration exception to be raised when
the iterator has finished. However, an iterator is able to iterate over infinite
sets and, for these, it is the responsibility of the user to make sure that the
way they use the iterator does not result in an infinite loop. You will see an
example of this shortly. First, let's look at an example of a basic iterator -
it will begin counting at 0 and continue incrementing indefinitely. This is a
simple version of itertools.count:
class count_iterator(object):
n=0

def __iter__(self):
return self

def next(self):
y = self.n
self.n += 1
return y
An example of its usage can be seen below. Note that the final line of the
program tries to convert the object into a list; the result will be an infinite
loop, simply because that specific iterator will never end:
>>> counter = count_iterator()
>>> next(counter)
0
>>> next(counter)
1
>>> next(counter)
2
>>> next(counter)
3
>>> list(counter) # This will result in an infinite loop!
Finally, to be accurate, we should make an amendment to the above: objects
that do not have the method __iter__ defined can still be iterable if they
define __getitem__. In this case, Python's built-in iter function will
return an iterator of type iterator for the object, which uses __getitem__ to go
over the items in the list. If __getitem__ raises an IndexError or a
StopIteration exception, the iteration will stop. Look at an example of how
that works:
class SimpleList(object):
    def __init__(self, *items):
        self.items = items

    def __getitem__(self, i):
        return self.items[i]
And its use:
>>> a = SimpleList(1, 2, 3)
>>> it = iter(a)
>>> next(it)
1
>>> next(it)
2
>>> next(it)
3
>>> next(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
A more interesting way of looking at this is the generation of the Hofstadter Q sequence with an iterator, given specific initial conditions. This nested recurrence first appeared in Hofstadter's book, Gödel, Escher, Bach: An Eternal Golden Braid, and the question of proving that the Q sequence is well defined for all values of n remains open. The next example uses an iterator to generate the sequence given by the nested recurrence:
Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2))
The initial conditions are given as a list. qsequence([1, 1]) generates the Hofstadter Q sequence exactly. The StopIteration exception is used to indicate that the sequence cannot go any further because generating the next element would require an invalid index. If, for example, the initial conditions are [1, 3], the sequence terminates immediately:
class qsequence(object):
    def __init__(self, s):
        self.s = s[:]

    def next(self):
        try:
            q = self.s[-self.s[-1]] + self.s[-self.s[-2]]
            self.s.append(q)
            return q
        except IndexError:
            raise StopIteration()

    def __iter__(self):
        return self

    def current_state(self):
        return self.s
And this is how it gets used:
>>> Q = qsequence([1, 1])
>>> next(Q)
2
>>> next(Q)
3
>>> [next(Q) for __ in xrange(10)]
[3, 4, 5, 5, 6, 6, 6, 8, 8, 8]
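Under Python 3 the protocol method must be named __next__ instead of next; a direct port of the class above, sketched below with only that rename, behaves identically:

```python
class QSequence(object):
    """Hofstadter Q-sequence iterator, Python 3 spelling of the protocol."""
    def __init__(self, s):
        self.s = s[:]

    def __iter__(self):
        return self

    def __next__(self):  # Python 2 called this method next()
        try:
            q = self.s[-self.s[-1]] + self.s[-self.s[-2]]
            self.s.append(q)
            return q
        except IndexError:
            raise StopIteration

    def current_state(self):
        return self.s

Q = QSequence([1, 1])
print([next(Q) for _ in range(12)])  # [2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8]
```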

Generators
A generator is an iterator that is defined with a slightly simpler function notation. Basically, a generator is a function that contains a yield expression. Generators do not return a value; instead, they yield a result when one is ready. Python automates the process of remembering the generator's context, that is, the values of its local variables and the location of the control flow. Each time the generator is advanced with __next__, the yield provides the next iteration value. __iter__ is also implemented automatically, which means a generator can be used wherever an iterator is needed. The next example shows an implementation that is functionally the same as the iterator class we looked at earlier, but easier to read:
def count_generator():
    n = 0
    while True:
        yield n
        n += 1
Now let's see how it works:
>>> counter = count_generator()
>>> counter
<generator object count_generator at 0x106bf1aa0>
>>> next(counter)
0
>>> next(counter)
1
>>> iter(counter)
<generator object count_generator at 0x106bf1aa0>
>>> iter(counter) is counter
True
>>> type(counter)
<type 'generator'>
Now we are going to use a generator to implement the Hofstadter Q sequence. Note that this is a much simpler implementation, but we can no longer implement methods such as current_state, which we used earlier. It is not possible to access a variable stored inside a generator's context from outside the generator, so current_state cannot be accessed from the object.
def hofstadter_generator(s):
    a = s[:]
    while True:
        try:
            q = a[-a[-1]] + a[-a[-2]]
            a.append(q)
            yield q
        except IndexError:
            return
Note that we have used a return statement with no parameters to end the generator's iteration. Internally, this raises a StopIteration exception.
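As a quick sanity check, here is a self-contained Python 3 run of the same generator (next(g) rather than g.next()); it yields exactly the values the iterator class produced earlier:

```python
def hofstadter_generator(s):
    # Work on a copy so the caller's initial-conditions list is not mutated
    a = s[:]
    while True:
        try:
            q = a[-a[-1]] + a[-a[-2]]
            a.append(q)
            yield q
        except IndexError:
            return  # ends the generator; StopIteration is raised internally

g = hofstadter_generator([1, 1])
print([next(g) for _ in range(12)])  # [2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8]
```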
The next example draws on the Groupon randomness-extraction interview question. We use one generator to implement a Bernoulli process, which is an infinite sequence of random Boolean values where True has probability p and False has probability q = 1 - p. Another generator implements a von Neumann extractor, which takes the Bernoulli process with 0 < p < 1 as an entropy source and returns a Bernoulli process with p = 0.5.
import random

def bernoulli_process(p):
    if p > 1.0 or p < 0.0:
        raise ValueError("p must be between 0.0 and 1.0.")

    while True:
        yield random.random() < p

def von_neumann_extractor(process):
    while True:
        x, y = process.next(), process.next()
        if x != y:
            yield x
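Note that process.next() is the Python 2 spelling; in Python 3 you write next(process). Here is a self-contained Python 3 sketch of the same two generators, seeded only so the demonstration run is repeatable:

```python
import itertools
import random

def bernoulli_process(p):
    """Infinite stream of random Booleans; True occurs with probability p."""
    if p > 1.0 or p < 0.0:
        raise ValueError("p must be between 0.0 and 1.0.")
    while True:
        yield random.random() < p

def von_neumann_extractor(process):
    """Unbias a Bernoulli stream: emit x from each unequal pair (x, y)."""
    while True:
        x, y = next(process), next(process)
        # (True, False) and (False, True) are equally likely, so keeping
        # the first element of unequal pairs gives p = 0.5
        if x != y:
            yield x

random.seed(42)
bits = list(itertools.islice(von_neumann_extractor(bernoulli_process(0.3)), 8))
print(bits)
```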
Lastly, a generator is a good tool for implementing a discrete dynamical system. The next example shows how the tent map dynamical system can be implemented with generators:
>>> def tent_map(mu, x0):
...     x = x0
...     while True:
...         yield x
...         x = mu * min(x, 1.0 - x)
...
>>>
>>> t = tent_map(2.0, 0.1)
>>> for __ in xrange(30):
...     print t.next()
...
0.1
0.2
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.8
0.4
0.799999999999
0.400000000001
0.800000000003
0.399999999994
0.799999999988
0.400000000023
0.800000000047
0.399999999907
0.799999999814
0.400000000373
0.800000000745
0.39999999851
0.79999999702
Another, similar example is the Collatz sequence:
def collatz(n):
    yield n
    while n != 1:
        n = n / 2 if n % 2 == 0 else 3 * n + 1
        yield n
Again, note that we don't need to raise the StopIteration exception manually; it is raised automatically when the control flow reaches the end of the function. The next example shows the Collatz generator in use:
>>> # If the Collatz conjecture were true then list(collatz(n)) for any n will
... # always terminate (though you might find your machine runs out of memory before!)
>>> list(collatz(7))
[7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
>>> list(collatz(13))
[13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
>>> list(collatz(17))
[17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
>>> list(collatz(19))
[19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
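For readers on Python 3, note that n / 2 produces a float there; the port below uses floor division (n // 2) to keep the values integers, and shows that the generator can feed anything that accepts an iterable:

```python
def collatz(n):
    yield n
    while n != 1:
        # // keeps the values integers in Python 3 (/ would give floats)
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield n

print(list(collatz(7)))
# A generator works anywhere an iterable is expected, e.g. counting terms:
print(sum(1 for _ in collatz(27)))
```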
Recursive Generators
A generator may be recursive, just as any function may be recursive. To show how this works, we will implement a simple version of itertools.permutations, a generator that yields all the permutations of a given list of items. (In practice, use itertools.permutations rather than the method shown here; it is a lot quicker.) The idea is simple: we move each element of the list to the first position in turn by swapping it with the first element, and the rest of the list is then permuted:
def permutations(items):
    if len(items) == 0:
        yield []
    else:
        pi = items[:]
        for i in xrange(len(pi)):
            pi[0], pi[i] = pi[i], pi[0]
            for p in permutations(pi[1:]):
                yield [pi[0]] + p
>>> for p in permutations([1, 2, 3]):
...     print p
...
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
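A quick sanity check (in Python 3 syntax, with range instead of xrange) confirms that the recursion yields all n! orderings with no duplicates:

```python
from math import factorial

def permutations(items):
    # Recursive generator: move each element to the front, permute the rest
    if len(items) == 0:
        yield []
    else:
        pi = items[:]
        for i in range(len(pi)):
            pi[0], pi[i] = pi[i], pi[0]
            for p in permutations(pi[1:]):
                yield [pi[0]] + p

perms = list(permutations([1, 2, 3, 4]))
print(len(perms))  # 24, i.e. 4!
```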
Generator Expressions
A generator expression allows you to define a generator using a simple notation, much like Python's list comprehension notation. The next example gives us a generator that iterates over every perfect square. Note that the results of generator expressions are objects of generator type and, as such, implement both the __iter__ and the __next__ methods.
>>> import itertools
>>> g = (x ** 2 for x in itertools.count(1))
>>> g
<generator object <genexpr> at 0x1029a5fa0>
>>> next(g)
1
>>> next(g)
4
>>> iter(g)
<generator object <genexpr> at 0x1029a5fa0>
>>> iter(g) is g
True
>>> [g.next() for __ in xrange(10)]
[9, 16, 25, 36, 49, 64, 81, 100, 121, 144]
The Bernoulli process can also be implemented with a generator expression; in the next example the probability is p = 0.4. If the generator expression needs another iterator as a loop counter, itertools.count is the best choice, especially when the result is to be an infinite sequence. Otherwise, you can use xrange:
>>> import itertools, random
>>> g = (random.random() < 0.4 for __ in itertools.count())
>>> [g.next() for __ in xrange(10)]
[False, False, False, True, True, False, True, False, False, True]

As I said earlier, you can pass a generator to any function that needs an iterator as one of its arguments. For example, we could write the following to sum the squares of the integers 0 through 9:
>>> sum(x ** 2 for x in xrange(10))
285
Chapter 5: Lambda, Map, Filter, & Reduce

Thanks to http://m.blog.csdn.net for the code examples


Lambda Operator
There are those who love the lambda operator and those who hate it; there are even those who seem to be a little afraid of it. The lambda operator is nothing to be scared of and, by the time you reach the end of this chapter, you may well love it.
The lambda operator, otherwise known as the lambda function, is a way of creating small, anonymous functions, i.e. functions without names. These are throw-away functions: they are needed only at the location where they are created. We mainly use lambda functions together with the filter(), map() and reduce() functions. Lambda was added to Python following strong demand from Lisp programmers.
Lambda syntax is very simple:
>>> f = lambda x, y : x + y
>>> f(1,1)
2
The map() Function
The lambda operator has a big advantage that becomes apparent when we use it together with the map() function. map() takes two arguments:
r = map(func, seq)
The first argument, func, is the name of a function and the second, seq, is a list or other sequence. map() applies the function func to every element of seq and returns a new list with all the elements altered by func:
def fahrenheit(T):
    return ((float(9)/5)*T + 32)

def celsius(T):
    return (float(5)/9)*(T-32)

temp = (36.5, 37, 37.5, 39)

F = map(fahrenheit, temp)
C = map(celsius, F)
In this example we did not use lambda functions. With lambda, we would not have needed to define and name the fahrenheit() and celsius() functions. This becomes clear in the next example:
>>> Celsius = [39.2, 36.5, 37.3, 37.8]
>>> Fahrenheit = map(lambda x: (float(9)/5)*x + 32, Celsius)
>>> print Fahrenheit
[102.56, 97.700000000000003, 99.140000000000001, 100.03999999999999]
>>> C = map(lambda x: (float(5)/9)*(x-32), Fahrenheit)
>>> print C
[39.200000000000003, 36.5, 37.300000000000004, 37.799999999999997]
>>>
We can apply map() to multiple lists, but the lists must all be the same length. map() applies the lambda function to the elements of the argument lists: it starts with the elements at index 0, then moves on to index 1, index 2 and so on, until it reaches the last index:
>>> a = [1,2,3,4]
>>> b = [17,12,11,10]
>>> c = [-1,-4,5,9]
>>> map(lambda x,y:x+y, a,b)
[18, 14, 14, 14]
>>> map(lambda x,y,z:x+y+z, a,b,c)
[17, 10, 19, 23]
>>> map(lambda x,y,z:x+y-z, a,b,c)
[19, 18, 9, 5]
This example shows you that the parameter called x will obtain its values
from the list called a, while y will obtain its values from b; z gets them from
the list called c.
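One caveat worth flagging: the sessions above are Python 2, where map() returns a list. In Python 3, map() and filter() return lazy iterators instead, so you wrap the result in list() to materialize the values:

```python
a = [1, 2, 3, 4]
b = [17, 12, 11, 10]

# In Python 3, map() returns an iterator rather than a list
m = map(lambda x, y: x + y, a, b)
print(m)        # a map object, not a list
print(list(m))  # [18, 14, 14, 14]
```

The same applies to filter(): list(filter(lambda x: x % 2, fib)) gives the list the Python 2 sessions show directly.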
Filtering
The filter() function offers a nice way of keeping only those elements of a list for which a function returns True. filter(f, l) requires a function f as its first argument. f must return a Boolean value, either True or False, and it is applied to every element of the list l. If f returns True for an element, that element is included in the result list; if it returns False, the element is left out.
>>> fib = [0,1,1,2,3,5,8,13,21,34,55]
>>> result = filter(lambda x: x % 2, fib)
>>> print result
[1, 1, 3, 5, 13, 21, 55]
>>> result = filter(lambda x: x % 2 == 0, fib)
>>> print result
[0, 2, 8, 34]
>>>
Reducing a List
The reduce() function repeatedly applies the function func to the sequence seq and returns a single value.
If seq = [s1, s2, s3, ..., sn], calling reduce(func, seq) works in this way:
To start with, the first two elements of seq are applied to func, so the list reduce() is working on now looks like [func(s1, s2), s3, ..., sn].
Next, func is applied to that result and to the third element of the list, so the list now looks like [func(func(s1, s2), s3), ..., sn].
This continues until a single element is left; that element is returned as the result of reduce(). The next example shows how this works:
>>> reduce(lambda x,y: x+y, [47,11,42,13])
113
reduce() Examples
How to work out a numerical value list maximum with reduce():
>>> f = lambda a,b: a if (a > b) else b
>>> reduce(f, [47,11,42,102,13])
102
>>>
How to use reduce() to work out the sum of a list of numbers from 1 to
100:
>>> reduce(lambda x, y: x+y, range(1,101))
5050
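A final portability note: in Python 3, reduce() is no longer a built-in and must be imported from functools. The examples above translate directly:

```python
from functools import reduce  # reduce is not a built-in in Python 3

# ((47 + 11) + 42) + 13, applied pairwise left to right
print(reduce(lambda x, y: x + y, [47, 11, 42, 13]))  # 113

# maximum of a list of numbers
print(reduce(lambda a, b: a if a > b else b, [47, 11, 42, 102, 13]))  # 102

# sum of the numbers from 1 to 100
print(reduce(lambda x, y: x + y, range(1, 101)))  # 5050
```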
Conclusion

Well, we have reached the end of our peek into intermediate Python programming and I hope that I have been able to teach you, at the very least, the basic concepts. Obviously, intermediate programming is much harder than Python for beginners but, provided you have a good understanding of the absolute basics of Python, you shouldn't do too badly with the intermediate material.
Please don't hesitate to read this book over and over until you are comfortable with the contents; it won't do to move straight on to advanced Python concepts until you are, or you will simply find yourself lost in the mire. There is plenty of help available for intermediate students: lots of forums and full Python courses that can help you truly understand what you are doing before you attempt anything harder.
Once again, I hope this course has been of some help to you.
