Largely due to a perfect storm of work, personal life changes, imposter syndrome, and burnout. But we don't need to go into which of us experienced which.

We want to thank everyone who wrote asking if we're okay.

We are, and we're excited to be publishing Episode 2 soon. It was recorded last April, so only half of the contents have already become outdated.

We're also developing new episodes, without that vintage feeling.

So thank you for your patience, and please watch this space for more in the very near future.

Your faithful friends from Python Out Loud.

--Kevin and Isaiah

`**`, not `^`

Suppose you want to calculate a large number like 2 to the 38th power. If you've used a Texas Instruments graphing calculator or a mathematical typesetting language like LaTeX, you might instinctively try the following in Python:

```
>>> 2^38
36
```

After a quick Google search, it's clear the caret (`^`) denotes "bitwise exclusive or", which isn't at all what we wanted. Per Python's documentation, we could instead write the computation in either of the following ways:

```
>>> 2**38
274877906944
>>> pow(2,38)
274877906944
```

Even though `2**38` and `pow(2,38)` produce the same number, one of them is working harder behind the scenes. To see this, consider a simple timing experiment. Using the `timeit` module, we can find the total execution time when each operation is repeated one million times:

```
>>> import timeit
>>> timeit.timeit('2**38', number=1000000)
0.01468262099660933 # this is about one-seventieth of a second
>>> timeit.timeit('pow(2,38)', number=1000000)
0.3849993069889024 # this is about one-third of a second
```

In other words, `2**38` takes about one-seventieth of a second, while `pow(2,38)` takes more than one-third of a second. That's a big difference!

`2**38` is faster than `pow(2,38)`

To see why `2**38` is faster, we can use the `dis` module to disassemble each operation into bytecode. In other words, we can turn each line of code into its underlying set of fundamental instructions:

```
>>> import dis
>>> dis.dis('2**38')
1 0 LOAD_CONST 2 (274877906944)
2 RETURN_VALUE
>>> dis.dis('pow(2,38)')
1 0 LOAD_NAME 0 (pow)
2 LOAD_CONST 0 (2)
4 LOAD_CONST 1 (38)
6 CALL_FUNCTION 2
8 RETURN_VALUE
```

In particular, the disassembled form of `pow(2,38)` includes the bytecode `CALL_FUNCTION`, while that of `2**38` does not. (In fact, the disassembly shows that `2**38` is constant-folded at compile time: the interpreter simply loads the precomputed value `274877906944`.) Given Python's reputation for having expensive function calls, it seems reasonable for `pow(2,38)` to be an order of magnitude slower than `2**38`.
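To isolate that function-call overhead, here's a quick sketch of ours (not from the original timing runs, and the exact numbers will vary by machine): even calling a function that does nothing typically costs more than evaluating the constant-folded `2**38` a million times.

```python
import timeit

def do_nothing():
    """An empty function: any time it takes to run is pure call overhead."""
    pass

# One million calls to a function that does no work at all...
call_time = timeit.timeit(do_nothing, number=1000000)

# ...versus one million evaluations of the constant-folded expression.
const_time = timeit.timeit('2**38', number=1000000)

print(f"empty call: {call_time:.4f}s, 2**38: {const_time:.4f}s")
```

On our machines, the bare call alone dominates the constant load, which is consistent with the disassembly above.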

`pow` != `math.pow`

However, not all function calls are created equal, and the number of bytecode instructions is not necessarily indicative of execution time. To see this, here's yet another method for computing 2 to the 38th power, along with its timing and disassembled form:

```
>>> import math
>>> math.pow(2,38)
274877906944.0
>>> import timeit
>>> timeit.timeit('import math; math.pow(2,38)', number=1000000)
0.277645947993733 # this is about one-quarter of a second
>>> import dis
>>> dis.dis('import math; math.pow(2,38)')
1 0 LOAD_CONST 0 (0)
2 LOAD_CONST 1 (None)
4 IMPORT_NAME 0 (math)
6 STORE_NAME 0 (math)
8 LOAD_NAME 0 (math)
10 LOAD_ATTR 1 (pow)
12 LOAD_CONST 2 (2)
14 LOAD_CONST 3 (38)
16 CALL_FUNCTION 2
18 POP_TOP
20 LOAD_CONST 1 (None)
22 RETURN_VALUE
```

Despite having the same name, the built-in function `pow` and the function `math.pow` from the `math` module are quite different. The built-in version performs exact integer multiplication, allowing arbitrarily large integers to be created (try it -- it's fun!). The version from the `math` module, though, produces approximate results more quickly using floating-point operations. However, this also means `math.pow` is susceptible to overflow when the result becomes too large for Python's floating-point representation:

```
>>> import math; math.pow(2,138)
3.48449143727041e+41
>>> import math; math.pow(2,1038)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OverflowError: math range error
```
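By contrast, the built-in `**` operator and `pow` work with exact integers of arbitrary size, so the exponent that overflowed `math.pow` above poses no problem. A quick check of ours:

```python
# The built-in ** operator uses exact arbitrary-precision integers, so the
# exponent that overflows math.pow works fine:
big = 2**1038
print(big == pow(2, 1038))   # True: both built-in spellings agree exactly
print(len(str(big)))         # 313: the result has 313 decimal digits
```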

In case you're not tired of computing `2**38`, we came up with three more methods, and will leave it up to you to provide your own bytecode disassembly for the first two:

```
>>> 1<<38 # compute 2**38 by shifting a bit left 38 times
274877906944
>>> import timeit
>>> timeit.timeit('1<<38', number=1000000)
0.015056486008688807 # this is about one-seventieth of a second
>>> import functools
>>> import operator
>>> functools.reduce(operator.mul, [2]*38) # compute 2**38 by explicitly building 2*2*...*2
274877906944
>>> timeit.timeit('import functools; import operator; functools.reduce(operator.mul, [2]*38)', number=1000000)
3.335591795970686 # this is more than three seconds!
>>> (2).__pow__(38) # compute 2**38 using a magic method call
274877906944
>>> timeit.timeit('(2).__pow__(38)', number=1000000)
0.39968948398018256 # this is about one-third of a second
>>> dis.dis('(2).__pow__(38)')
1 0 LOAD_CONST 0 (2)
2 LOAD_ATTR 0 (__pow__)
4 LOAD_CONST 1 (38)
6 CALL_FUNCTION 1
8 RETURN_VALUE
```

Working our way through these examples, the obtuse bit-shifting trick for computing powers of `2` is the fastest; explicitly building `2` multiplied by itself `38` times is the slowest; and having the integer `2` explicitly call the so-called "magic method" `__pow__` has roughly the same performance and bytecode representation as `pow(2,38)`. (Note that parentheses are necessary around `2`. This prevents Python from assuming the period is a decimal point and throwing a `SyntaxError`. Ask us how we know.)
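If you'd like to see that `SyntaxError` for yourself without crashing a session, one way (a sketch of ours, not from the original post) is to hand both spellings to `compile`:

```python
# Without parentheses, the tokenizer reads "2." as the start of a float
# literal, so the expression fails to parse:
try:
    compile("2.__pow__(38)", "<demo>", "eval")
    parses = True
except SyntaxError:
    parses = False
print(parses)  # False

# With parentheses, the expression parses and evaluates normally:
print(eval("(2).__pow__(38)"))  # 274877906944
```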

The notation `(2).__pow__(38)` looks strange, but it's essentially how Python computes `pow(2,38)` under the hood. Everything in Python is represented as an object, even integers, and every object can invoke methods from its class. Per the documentation for the `operator` module, writing `pow(2,38)` is the same as having `2` invoke the method `__pow__`, and that magic method is what actually produces the value of `2**38`.
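One way to convince yourself of this dispatch (our own toy example, with a hypothetical class name) is to subclass `int`, override `__pow__`, and watch both spellings route through it:

```python
calls = []

class LoudInt(int):
    """A toy int subclass whose __pow__ records each invocation."""
    def __pow__(self, exponent):
        calls.append(exponent)
        return int(self) ** exponent

two = LoudInt(2)
print(two ** 38)     # 274877906944, computed via LoudInt.__pow__
print(pow(two, 38))  # 274877906944, through the very same method
print(calls)         # [38, 38]: both spellings hit __pow__
```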

In other words, `pow(2,38)` reduces to `(2).__pow__(38)`, which takes the long way to get to `2**38`. Just writing `2**38` on its own, though, takes even more shortcuts to get to the 12-digit number `274,877,906,944`.

Making `2^38` make sense in Python

Understanding how `pow(2,38)` reduces to `(2).__pow__(38)` is more than an academic exercise. Once we realize the interconnectedness of binary operators and magic methods, we can (at least attempt to) make the world a better place. Here's a quick example:

```
>>> import builtins
>>> class BetterInt(int):
...     def __xor__(self, exponent):
...         return self**exponent
...
>>> builtins.int = BetterInt
>>> two = int(2)
>>> two^38 # = two.__xor__(38) = 2**38
274877906944
```

We first import the `builtins` module, which keeps track of Python's built-in classes and objects.

Then, we build a new class called `BetterInt`, inheriting all of the properties from the built-in `int` class, but defining a new implementation of the `__xor__` magic method. In other words, we're bending the universe to our will by writing our own version of the magic method Python calls to resolve the caret (`^`) operator.

Finally, just to compound the madness, we tell Python to throw away the built-in version of integers and use our `BetterInt` class instead. And this "better integer" implementation allows us to finally compute exponents just like we're used to outside of Python.

Who needs a "bitwise exclusive or" operator, anyway?
