# Personal Blogs

## Deriving Gamma in Special Relativity

Visible to anyone in the world
Edited by Martin Thomas Humby, Thursday, 27 June 2024, 19:03

This is an addendum to some aide-mémoire notes I am making on Relativity. Perhaps somebody may find it useful. Nowadays an aide-mémoire has become more and more of a requirement for me.

In Special Relativity (SR), gamma (γ, Greek lower case), aka the Lorentz Factor, signifies the multiplier required to obtain a dilated (increased) time interval between two co-located events in inertial frame A as observed from another inertial frame B where A and B are in relative motion. Time dilation increases with relative velocity so gamma can be written as a function γ(v). Gamma is ubiquitous in SR calculations.

Observers in both frames will measure the same spacetime separation between events occurring in either frame but, to allow a simple time-only translation using just γ, the events in A must be co-located – in the same spatial location in that frame, that is.

Time effects and most other outcomes of SR are symmetrical. If A sees B’s clock running slow then B will see A’s clock slow. This must be so to maintain the relativity principle: the motion of any entity can only be defined in relation to another entity and consequently absolute motion or rest have no meaning. With all motion being relative, observations made from one frame must mirror those made from the other.

Similarly, light (electromagnetic radiation) is not subject to Galilean velocity addition and has the same absolute speed for all observers moving or otherwise. This is a requirement if electromagnetic phenomena are to show the same results in all locations as indeed they do. Both these requirements need additional examination elsewhere.

The derivation

For various reasons I found it difficult to find a convincing derivation for gamma. There was symbol inconsistency between sources, with the common use of get-outs such as 'rearranging' and 'simplifying' foxing my rudimentary algebra skills. Some internet sources are just plain wrong, I think. It was amusing to see that various online 'AI' equation solvers produced results with little resemblance to Einstein's. Below is a derivation that shows all algebraic steps explicitly.

Most derivations, apart from Einstein (1905 p7), feature light clocks and Einstein does appear to mention the elongated light path seen in a moving frame:

[Figure 1: a light clock – a light pulse bouncing between two mirrors traces a vertical path in the clock's rest frame and a longer diagonal path as seen from a frame in which the clock moves.]

In fact the mirrors, shown vertically spaced, can have any orientation at 90º to the direction of travel.

The norm is to consider a full clock cycle where a light pulse leaving the bottom mirror is reflected off the top mirror and returns. The ratio between the length of half a cycle in the stationary clock and half in the moving clock is obviously the same but a possible justification for the full-cycle approach is to emphasize that the two events, leaving and returning, must be co-located. Following this option:

If the time taken for a full cycle of the stationary clock is t, the distance the light pulse travels at velocity c between the mirrors and back is:

c·t = 2L    (1)

where L is the spacing of the mirrors.

Similarly, in the moving clock if t′ is the time taken for a full cycle, the increased distance the light travels along a single diagonal is c·t′/2. The distance the clock has moved during half a cycle is v·t′/2 where v is the relative velocity of the moving frame. The spacing of the mirrors remains unchanged. Applying Pythagoras' theorem to the right triangle in Figure 1 we can equate light travel distance to L and v·t′/2:

(c·t′/2)² = L² + (v·t′/2)²    (2)

Combining equations (1) and (2), substituting L = c·t/2,

(c·t′/2)² = (c·t/2)² + (v·t′/2)²

expanding the brackets

c²t′²/4 = c²t²/4 + v²t′²/4

then multiplying both sides by 4 and dividing by c²

t′² = t² + v²t′²/c²

rearranging gives

−t² = v²t′²/c² − t′²

multiplying both sides by −1

t² = t′² − v²t′²/c²

take out t′²

t² = t′²(1 − v²/c²)

square root both sides

t = t′·√(1 − v²/c²)

or t′ = t/√(1 − v²/c²), that is t′ = γ·t with γ = 1/√(1 − v²/c²), may be preferred.

From this, the period of one (or n) clock cycles of the local clock is equal to the period of the same number of clock cycles of the moving clock multiplied by √(1 − v²/c²). The longer a clock cycle the slower the clock will run, so dilated intervals for this clock as seen by an external observer will involve the reciprocal, γ:

Δt_A = γ·Δt_B

where Δt_A and Δt_B are the intervals between events seen in their respective frames.

Examples

Δt_A = γ·Δt_B gives intervals, as seen by an observer in frame A, between co-located events in B. If a spaceship B has a velocity of half the speed of light relative to space-station A then

γ = 1/√(1 − 0.5²) = 1/√0.75 = 1.1547

Say that after every minute that passes by shiptime B transmits a time signal – these transmissions constitute co-located events in B. Standing off at right angles to B’s course to minimize any Doppler effect, space-station A receives signals at 1.1547 minute intervals. So, for every 1.1547 minutes of time that passes in A only 1 minute has elapsed in B.

To obtain the reduced time period experienced in moving frame B we need the reciprocal of gamma:

Δt_B = Δt_A/γ    where 1/γ = √(1 − 0.5²) = 0.8660

From A’s perspective it will take B a year to travel half a lightyear but for B the time elapsed is 1/1.1547 = 0.8660 years.
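The arithmetic in both examples can be checked with a few lines of Python (my own sketch, not part of the original post; the function name gamma and the use of beta = v/c are mine):

```python
import math

def gamma(beta):
    # Lorentz factor as a function of beta = v/c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

g = gamma(0.5)         # spaceship at half the speed of light
print(f"{g:.4f}")      # 1.1547 - minutes passing in A per ship minute
print(f"{1 / g:.4f}")  # 0.8660 - the reciprocal: years elapsed in B per year in A
```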

Acknowledgement

The above derivation is based on Gray (2022) expanded and annotated with basic examples. Any misunderstandings and errors are without doubt my own.

References

Einstein, A. (1905) On the Electrodynamics of Moving Bodies. (trans. Dr. Anna Beck, and consultant Professor Peter Havas), Princeton Digital Einstein Papers [online] https://dn790004.ca.archive.org/0/items/einstein-1905-relativity/Einstein_1905_relativity.pdf

Gray, N. (2022) A Student’s Guide to Special Relativity, Cambridge University Press


## Install Packet Tracer: Linux Mint (Cinnamon)

Edited by Martin Thomas Humby, Monday, 22 Oct 2018, 00:05

Open the file-manager (Nemo) and double click the archive to open in the archive manager
[Extract] to a new folder say ‘PT72’.
Open PT72 in Nemo, double click the ‘install’ file and select [Run in Terminal]

To read the EULA press Return then repeatedly press Space.
Enter ‘Y’ to accept then ‘enter’ (Return) to install in the default location: /opt/pt
Enter ‘Y’ to get root access followed by your password (password characters do not show as stars under Linux bash: the field remains blank while typing)

Click the desktop and open a terminal with Ctrl+Alt+T. Enter ‘packettracer’. Accept the default location for user files, e.g. /home/martin/pt, and log in to Cisco: Packet Tracer opens.

Close Packet Tracer

Right click on the Menu icon: Configure…
Select desired group say ‘Programming’ in the Applications tree on the left then [New Item]

Name: Packet Tracer 7.2
Command: packettracer
Comment: Networking simulation

Click on the rocket to provide an appropriate icon, navigating to say + Other locations > Computer/opt/pt/art/app.png

[OK]
Launch Packet Tracer from the start Menu

Delete the installation files directory ‘PT72’ or whatever.

## Installing apache cordova under Linux: Summary

Edited by Martin Thomas Humby, Sunday, 22 July 2018, 16:01

Installing Cordova under Linux Ubuntu / Mint is simple but many guides found on the web and elsewhere are out of date and/or hard to follow. This post is a list of commands for those faced with an incomprehensible set of instructions and an aide-mémoire for me. Commands for installation were tested using a fresh copy of Linux Mint 18.2 (Ubuntu 16.04 Xenial Xerus). As recommended, Cordova is installed using the JavaScript package manager NPM.

Only minimal experience using the command line is assumed. Reminder: paste in a terminal window is Edit > Paste or Ctrl+Shift+V. The sudo password expires after 15 minutes (by default) so it is probably best to copy and paste the first sudo-prefixed command separately for each section. With this proviso commands can be copied and pasted as a block (ensure the basic command prompt shows afterwards, or press Return to get it). After sudo all characters in the user password you type are hidden, no dots or * displayed. Text following # is my remarks and can be included in the copy or not as convenient.

Open a terminal window
Click on the desktop and press Ctrl+Alt+T. The command prompt shows:

martin@mint18 ~ $    # for example

The prompt is shown below where a particular current directory is assumed in commands – avoid including bits of it in copy.

Check Java is installed

java -version

If Oracle Java has been installed previously this should get something like

java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

If it is not present enter the commands

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

A similar painless method can be used to install Java 10. If Open Java (OpenJDK) is present from java -version it is probably best to uninstall it.

Check Node.js is installed

nodejs --version

If not found, download an installer for the Node.js LTS version:

curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get update    # essential for a correct install

and do the install

sudo apt-get install -y nodejs

Node.js should include the JavaScript package manager NPM so check it worked by getting versions:

nodejs -v
sudo npm -v    # sudo is optional here but will save a spurious update-check warning

Install the Cordova application

sudo npm install -g cordova
sudo chown -R $USER ~/.config/configstore    # give the user (you) access after NPM install as root
cordova    # should display a help screen

Set system variables

Cordova requires two new system variables JAVA_HOME and ANDROID_HOME and modification of the PATH variable.

JAVA_HOME

NB: I have shown the Java version as 10 in all occurrences below because this blog renders the string '8-o' as a 'surprise' emoticon when copied from the page. Version 8 should be used until Android Studio / SDK catches up.
sudo find / -name java-10-oracle  # or java-8

This will return something like

[sudo] password for martin:
find: ‘/run/user/1000/gvfs’: Permission denied
/etc/java-10-oracle
/usr/lib/jvm/java-10-oracle

The JVM location is the one we want.

Find the name of the text editor that came with your system (Help > About from its menu bar). Open a new terminal window and run the editor with root privileges

sudo editor-name    # do not close the terminal window or this may close the editor instance
# alternatively
sudo editor-name /etc/environment    # should open the required file directly

Edit  /etc/environment adding a line at the bottom of the file

JAVA_HOME="/usr/lib/jvm/java-10-oracle"  # or substitute java-8
Save the modified file, close the editor and terminal window.

ANDROID_HOME and PATH

To generate an Android app Cordova requires the Android SDK and Gradle. Both should be installed under your home directory with their locations identified in these system variables. If Android Studio is installed at a later date, when you first create a project it will copy the SDK and more to a directory ~/Android/Sdk, so we might as well put it there now. The version of Gradle included in PATH below is that used in TM352-17J: modify for a later version.

Open the text editor and do File > Open, Right Click and check Show Hidden Files. Open the file .bashrc and add the following lines at the bottom. Alternatively entering: editor-name ~/.bashrc in a terminal should open the file directly.

# added for Cordova install
export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$JAVA_HOME/bin:$ANDROID_HOME/tools/bin:$ANDROID_HOME/platform-tools:$HOME/Gradle/gradle-4.4.1/bin:$PATH

Save the file.

To register JAVA_HOME the system must be restarted but before doing so you may wish to save a copy of the installation so far from the terminal window to a file: Edit > Select All, Ctrl+Shift+C and paste into a text editor.

Install Android SDK

Open a terminal window and create the directory for the SDK

martin@mint18 ~ $ mkdir Android/Sdk

Download the Linux sdk-tools .zip from https://developer.android.com/studio/#downloads (from the 'Command line tools only' section) to Android/Sdk. Extract the zip’s tools folder (the tools for creating Android apps) into this directory.

It is essential that ANDROID_HOME is set correctly here or sdkmanager will put files in the wrong folder. To check it is correct

echo $ANDROID_HOME

Optionally, to avoid a warning enter

martin@mint18 ~ $ mkdir .android && touch .android/repositories.cfg

Copy the following, then paste and enter at the command prompt

sdkmanager "platform-tools" "platforms;android-26" "build-tools;26.0.3" "system-images;android-26;google_apis;x86" "extras;android;m2repository" "extras;google;m2repository"

Accept licence agreements. Versions are those used in TM352-17J: check they meet your requirements. There should now be a new folder ‘cache’ in /home/martin/.android/ containing many .xml scripts and several new folders in ANDROID_HOME including build-tools. The file repositories.cfg remains empty, indicating this file and any warning generated if it does not exist are probably spurious.

Install Gradle

martin@mint18 ~ $ mkdir Gradle

Check installation
To check that all this worked a Cordova project must be created
martin@mint18 ~ $ cordova create Test_cordova
cd Test_cordova
cordova platforms add android
cordova requirements

The result should be

Android Studio project detected
Requirements check results for android:
Java JDK: installed 1.8.0
Android SDK: installed true
Android target: installed android-26
Gradle: installed /home/martin/Gradle/gradle-4.4.1/bin/gradle

If an error is seen check the relevant system variable is correct and the names of files match. Otherwise installation is complete.

JAVA_HOME could be defined in ~/.bashrc using export, as for ANDROID_HOME and PATH, saving the complication of running a text editor with root privileges. However, Java is available to all applications, so it is good practice to put the definition in /etc/environment. Then if additional software, Android Studio for example, is installed, Cordova will continue to work even if JAVA_HOME is modified for some reason.


## Python 1: How to use a class as a table

Edited by Martin Thomas Humby, Wednesday, 31 Oct 2018, 23:07

There is a simple and perhaps obvious relationship between a class of objects, a database table and the table represented as a two-dimensional array in a programming language: the class is the table and every instance of the class forms a row in that table. In Python, class-to-array conversion is equally simple.

Basic Python does not provide multi-dimensional arrays in the C / Pascal static-array sense but goes straight to the more flexible but less computationally efficient lists of lists. The NumPy extension does support these types directly so, to keep things simple, it is convenient to use NumPy. A two-dimensional array can be declared as follows:

import numpy as np
array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

The numpy.ndarray returned by this call displays itself as

array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])

np.array() iterates the Python list of lists to get this result.

Another extension, Pandas, built on Numpy, moves closer to a database table by supplying an index of column names and an index of rows:

import pandas as pd
series3 = pd.DataFrame(array, columns=['from_1', 'from_2', 'from_3'])

To allow a list of objects to be converted to a NumPy array or Pandas DataFrame, objects need an __iter__() method that yields each desired attribute of the object, replacing the sublists in the above example. Optionally a __str__() method to supply a string representation when an instance is seen in a column, and a class method columns() that returns a list of column names, can also be provided:

class Person:
    def __init__(self, name, age, notes):
        self.name = name
        self.age = age
        self.notes = notes

    def __iter__(self):
        yield self
        yield self.name
        yield self.age
        yield self.notes

    def hello(self):
        return "Hello! I'm " + self.name + '. ' + self.notes

    def __str__(self):
        return 'Person: ' + self.name

    @classmethod
    def columns(cls):
        return ['Items', 'name', 'age', 'notes']

Now we need some kind of storage class to hold the table. Thinking that we might want to select items from the structure by name a dictionary suits:

class Struct:
    def __init__(self):
        self.facet = {}

    def add_to_facet(self, item):
        self.facet[item.name] = item

    @property
    def items(self):
        return pd.DataFrame(list(self.facet.values()), columns=Person.columns())

Putting it together:

people = Struct()
people.add_to_facet(Person('Jim', 42, 'The time has come for all good men to come to the aid of the party'))
people.add_to_facet(Person('Jane', 24, 'She sells sea shells on the sea shore'))
people.add_to_facet(Person('Jill', 17, 'The quick brown fox jumps over the lazy dog'))
print(people.items)

Items are objects:

df = people.items

obj = df.Items[0]

print(obj.hello())

"Hello! I'm Jim. The time has come for all good men to come to the aid of the party"

So, that is a basic structure with one facet. In a real application with multiple facets represented by multiple dictionaries it may be useful to be able to select the object attributes returned to build a table on the fly:

# modify the DataFrame columns returned by observations
def observations_columns(self, *fields, columns=0):
    self._observations_df = None
    s = 'def f(self):pass'
    for fld in fields:
        if fld == 'observation' or fld == '' or fld == 'self':
            s += ';yield self'
        else:
            s += ';yield self.' + fld
    s += '\nObservation.__iter__=f'
    exec(s)
    if columns != 0:    # allow for None columns
        Observation._cols = columns
    else:
        names = list(fields)
        for i, name in enumerate(names):
            names[i] = names[i].replace('.', '_')
        Observation._cols = names

Without the complications from joining tables, intermediate tables to represent variable content fields or combining documents, the power of OOP allows attributes to be selected from the immediate object or from other objects referenced from it down to any level.
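As an illustration of reaching through object references (my own sketch; the Address class and attribute names are invented, not from the post), an __iter__() can yield attributes of an object referenced by the immediate object:

```python
class Address:
    def __init__(self, city, country):
        self.city = city
        self.country = country

class Person:
    def __init__(self, name, address):
        self.name = name
        self.address = address

    def __iter__(self):
        # select attributes from this object and from a referenced object
        yield self.name
        yield self.address.city      # one level down the reference chain
        yield self.address.country

rows = [list(p) for p in [Person('Jim', Address('Leeds', 'UK')),
                          Person('Jane', Address('Cork', 'Ireland'))]]
print(rows)  # [['Jim', 'Leeds', 'UK'], ['Jane', 'Cork', 'Ireland']]
```

The same rows could be passed straight to a DataFrame constructor as before.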


## Learn to code for data analysis

Edited by Martin Thomas Humby, Friday, 20 Nov 2015, 16:02

Just started late on this FutureLearn OU course and this is a convenient place to post the Week 1 Project.

Hoping to get an intro to TM351 from this course. See how it goes, quite interesting so far. Only problem: the course software won't run under XP. A Linux install on the SSD where the page file currently sits might be a no-cost solution.


## Algorithms 14: Subtype polymorphism and encapsulation

Edited by Martin Thomas Humby, Wednesday, 13 Dec 2023, 20:21

In Algorithms 11, binding methods to an instance of a class to get dynamic dispatch was demonstrated: calling a method against an instance at a reference calls the method belonging to that object. Algorithms 12 showed class inheritance: inheritance of method names together with the functionality of those methods.

Together these two attributes provide subtype polymorphism: the ability to view an instance of a subclass as an instance of its superclass. Subtype polymorphism is enforced in statically typed languages, where calling a method unknown to the nominal class of a reference (or by-value variable) gets a compile-time error. Python provides more or less total flexibility, a variable can be viewed as any type, but compliance with the framework supplied by static typing can provide a basis that is known to work.
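A minimal sketch of subtype polymorphism in Python terms (the Shape, Square and Circle names are my own invention): each element of the list is treated simply as a Shape while dynamic dispatch selects the subclass method:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

# each element is viewed as a Shape; dispatch picks the subclass method
shapes = [Square(2), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # [4, 3.14]
```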

The utility of class inheritance for building new classes from existing classes with added or specialised functionality is obvious: it saves work and reduces code size together with associated complexity. Copy and paste could be used to get a new type but using inheritance the total amount of code is reduced. In Algorithms 12 a Stack was turned into an extendable structure, a List, simply by replacing __init__() and push() with new methods and introducing a helper method _check_capacity().

Replacing an inherited method is known as overriding the method. To maintain polymorphism the parameter types and return type of the new method must be the same as those of the overridden method or steps taken to ensure the new method still works when presented with the same type of arguments as the method being replaced. Stack.__init__() for example, was designed to accept an integer, the maximum size of a stack, so List's __init__() should also work when passed an integer as well as an Iterable. (Iterable is the base class of List and Python's list, so a new List can be constructed from either of these types).
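The point about List's __init__() still working when passed an integer might be sketched like this (a hypothetical reconstruction of the idea, not the actual Algorithms 12 code):

```python
from collections.abc import Iterable

class Stack:
    def __init__(self, max_size):
        self._items = [None] * max_size   # fixed capacity
        self._count = 0

class List(Stack):
    def __init__(self, arg):
        # still accepts an integer, as Stack.__init__ does,
        # but also any Iterable supplying the initial items
        if isinstance(arg, Iterable):
            items = list(arg)
            super().__init__(len(items))
            self._items = items
            self._count = len(items)
        else:
            super().__init__(arg)

print(List(4)._count)          # 0 - constructed from a maximum size
print(List([1, 2, 3])._count)  # 3 - constructed from an iterable
```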

A function's name, parameters and return type comprise its signature. Many statically typed languages allow use of functions with the same name but different signatures, known as overloaded functions or methods. From the compiler's viewpoint these are separate functions. This can cause problems in some languages when pre-compiled libraries are used because the overloaded function names exposed by the library have been changed by the compiler, a process known as name mangling.

Many languages allow functions to be called without their return being assigned to a variable. In this case the expected return type will be unknown to the compiler so the signature used for overloading excludes the return type. When overloading is available execution efficiency will suffer if the type of a parameter is chosen to be that of a mutual ancestor with runtime type checking used to determine the actual type rather than writing separate overloaded methods for the different types.

Python's dispatch mechanism uses the method name to determine which method to call. Parameter types do not feature in this selection, excluding overloading. Any method with the same name in a subclass effectively overrides the superclass method, but complete flexibility as to the type of parameters passed to a function can replace overloading. Runtime type checking or named default parameters are used to determine how parameters of different types are processed. Named defaults are considered more 'Pythonic' but are less transparent in use.
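The two approaches that stand in for overloading might look like this (hypothetical function names, my own example):

```python
# runtime type checking in place of overloading
def describe(x):
    if isinstance(x, str):
        return 'text: ' + x
    if isinstance(x, (int, float)):
        return 'number: ' + str(x)
    return 'other: ' + repr(x)

# the more 'Pythonic' alternative: named default parameters
def connect(host, port=80, timeout=None):
    return (host, port, timeout)

print(describe('abc'))                    # text: abc
print(describe(42))                       # number: 42
print(connect('example.org', timeout=5))  # ('example.org', 80, 5)
```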

Encapsulation

Coding abstract data types without OOP in languages such as UCSD Pascal, Ada 83 and Delphi, there can be complete separation of the operations appearing in the interface from the workings hidden in the implementation. Anything declared in the implementation, perhaps located in a separate file, is private and hidden in its entirety from client code.

Interface / implementation separation has become known as information hiding, a term originating from the design strategy proposed by David Parnas (Parnas 1972). In OOP such separation is provided by encapsulation but the facilities available to achieve it vary from one language to another. In general encapsulation means grouping the methods that operate on data with the data rather than passing data objects with no inherent functionality to a hierarchy of possibly shared subroutines.

OOP languages tend to declare all fields and methods in an interface file or section, hiding only method implementations, or feature no physical separation of code, the entire class appearing in a single file. A typical C++ program, for example, will define a class in a .hpp header file and put method implementations in a separate .cpp file.

In C++ any class that is not defined as abstract can be instantiated by value simply by declaring a variable of the class. In this case the compiler needs access to the total size of fields so sufficient space can be allocated to accommodate the instance. When all instances are by reference as in Java etc. this is not a requirement. All references have the same size in memory and the actual allocation is done at runtime by object creation. Even so, Java and many other languages still feature all-in-one-place class definitions.

To overcome implementation exposure two strategies have been adopted: first, inheritance from a pure abstract class or interface type, without any fields and featuring only method definitions; secondly, keywords defining levels of exposure for the members of a class (its methods and fields), generally private, protected and public.

With inheritance from an abstract class descendant instances can be assigned to a reference of the abstract type and viewed as such. The descendant supports the interface defined by the superclass. Languages that do not provide multiple inheritance generally define an interface type and a class can support multiple interfaces. The actual implementation can be ignored by client code and can be made reasonably inaccessible through the reference providing interface / implementation separation. This comes at the cost of pre-definition of the abstract type and perhaps an additional execution overhead resulting from multiple inheritance.

Beyond the separation of the interface as a set of public methods, other levels of exposure only relate to re-use of functionality through inheritance. Fields and methods defined as private are hidden from client code and in subclasses. Those defined as protected are inaccessible to client code but are visible in a subclass implementation.

Making fields and methods private imposes restrictions on the options available when writing a subclass. For example, a base class Person might have a private field date_of_birth. If modification to this birthdate after instantiation is a requirement, a public method set_birthdate() is needed. A subclass, Employee say, must either use the inherited setter method or override it, perhaps to ensure that no person below a certain age can be employed. An overridden setter cannot access the private field but calls the inherited setter to modify the birth date. This minimum age requirement is then imposed on all subclasses of Employee, not so if date_of_birth had been given protected exposure.

Sounds logical, but such a lack of flexibility can cause problems when the only access a descendant of Employee can have to the birth date is through the inherited setter. Suppose at a later time a new class Trainee is required with a lower age limit. Due to the imposed limit no Trainee can be an Employee and cannot inherit or take advantage of any functionality already written in relation to employees. Reducing the limit for Employee will likely break existing age verification.
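Translated into Python terms (a hypothetical sketch: Python only approximates private exposure through name mangling, and the age limit and current_year parameter are my own additions to the Person / Employee example from the text), the overriding setter can only reach the birth date through the inherited setter:

```python
class Person:
    def __init__(self, date_of_birth):
        self.__date_of_birth = date_of_birth   # 'private' via name mangling

    def set_birthdate(self, year):
        self.__date_of_birth = year

    def get_birthdate(self):
        return self.__date_of_birth

class Employee(Person):
    MIN_WORKING_AGE = 16   # hypothetical minimum age

    def set_birthdate(self, year, current_year=2024):
        if current_year - year >= self.MIN_WORKING_AGE:
            # cannot touch __date_of_birth directly: call the inherited setter
            super().set_birthdate(year)

e = Employee(1990)
e.set_birthdate(2020)      # rejected: under the age limit
print(e.get_birthdate())   # 1990
e.set_birthdate(2000)      # accepted
print(e.get_birthdate())   # 2000
```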

Another consideration may be the additional overhead of methods that call other methods, not good when execution efficiency is in any way critical. To overcome these problems Java provides a fourth level of exposure known as 'package-private', the default when no other is specified. All Java classes exist in a package, a grouping of class files. Package-private members are visible in the implementation of all other classes in the same package but inaccessible to classes located in other packages.

Package-private exposure finds widespread use in supplied Java class libraries and many other languages provide an equivalent. Using it as an alternative to protected exposure is all very well if it is possible to extend a package, not a possibility for these Java libraries. Consequently minor modifications to the functionality of a HashMap say, with a requirement for equivalent execution efficiency means starting almost from scratch. The only useful library definition is the Map interface.

Coding descendants, there is really very little to be gained from these restrictions, and making the implementation completely inaccessible is not possible in Python. Protected and private categorization is available by a naming convention rather than error generation, and more reliance is placed on organization and coding by thinking. Similar measures are available for structuring the code to provide modularity.

Python information hiding

The smallest increment of Python functionality is a class. Classes can contain nested classes but putting a class in a nested location has no particular significance other than an indication of associated usage. For an inner instance to have knowledge of an enclosing object it must be given a reference to it:

class Outer(object):
    class Nested:
        def __init__(self, outer):
            self._outer = outer
        ...

    def __init__(self):
        self._inner = Outer.Nested(self)
        ...

Python provides no direct equivalent of private and protected exposure but a convention that variable and helper method names starting with an underscore are protected and should not be accessed directly by client code is useful. A leading double underscore supplies some equivalence to private members but this provision is not required in most cases.

Constants exposed by a module can be put in all upper case to indicate their status but to get a constant that is immutable as far as possible it must be made a member of a class accessible through a property. This option is shown in the download package1.module2 but going to these lengths is not what Python is about, I think.

Python, Delphi, C# and some dialects of C++, provide so called properties, a construct that replaces public getter and setter methods generally to control or exclude modification of the value of private or protected fields by client code. The property is seen as a virtual field so we can write x = aproperty and when write modification has been programmed aproperty = x, both without function-call brackets or arguments.

Because all members are accessed by name in Python its properties can supply validation of submitted values in a subclass even if the base class provides direct access. A property with the name of the base class field replaces it. A number of different syntax and variations are available but typical usage might be:

class Base:
    def __init__(self, value):
        self.value = value     # value is a fully accessible field
    def halve(self):
        self.value /= 2

class Subclass(Base):
    def _get_value(self):
        return self._value
    def _set_value(self, value):
        if 0 <= value < 100:
            self._value = value
    value = property(_get_value, _set_value)

instance = Subclass(99)
print(instance.value)
instance.halve()
print(instance.value)

In the Subclass the property value replaces the base class field and all inherited methods now address the field _value through the setter and getter functions. The base class field value does not exist in the subclass so there is no wasted allocation.

In Subclass the field name _value could have been given a double underscore indicating private status. Doing this invokes name-mangling and __value becomes an alias for the mangled name in the implementation. The mangled name, which always follows the pattern _Classname__membername, here _Subclass__value, is visible in subclasses and client code as well as in the current implementation. Such members are not truly private, simply less accessible to erroneous use.
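A two-line demonstration of the mangled name (my own example):

```python
class Subclass:
    def __init__(self):
        self.__value = 42   # stored under the mangled name _Subclass__value

s = Subclass()
print(s._Subclass__value)   # 42 - still reachable, just less accessible
```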

Double underscore method names can be used to get the same effect as the statically dispatched (non-virtual) instance methods available in C++ etc. When a dynamically dispatched method is overridden this directs all calls to the new method, very likely an unwanted side effect when the method is a helper used by multiple other inherited methods. Replacing a 'static' method in a subclass, same name, same signature, this does not occur and any calls in inherited methods still go to the superclass method.

The two alternatives are contrasted by mod_list.py and mod_list_static.py in the download. An overridden _check_capacity() method and a 'static' equivalent __check_capacity() are shown as respective alternatives. The replaced method is used in a subclass by insert_at() but not by push(), which must continue to use the inherited version.
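The effect can be sketched as follows (a simplified reconstruction, not the actual mod_list code from the download; the helper here returns a string so the dispatch choice is visible):

```python
class Stack:
    def push(self):
        # compiled in Stack's scope as self._Stack__check_capacity()
        return self.__check_capacity()
    def __check_capacity(self):
        return 'Stack capacity check'

class List(Stack):
    def insert_at(self):
        # compiled in List's scope as self._List__check_capacity()
        return self.__check_capacity()
    def __check_capacity(self):   # same name, but no override takes place
        return 'List capacity check'

lst = List()
print(lst.push())       # Stack capacity check - inherited push keeps Stack's helper
print(lst.insert_at())  # List capacity check
```

Had the helper been named with a single underscore, _check_capacity(), the version in List would have overridden it and push() would have called the new method.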

Python modules and packages

A Python file or 'module' with a .py extension can contain constants, variables, functions and class definitions. In larger programs modules supply the next increment of modularity above classes. Lest we forget which module is being used, within a module its canonical name is available through the global variable __name__. However, when a module is run as a program __name__ is set to '__main__', giving the option of excluding minor test code from module initialization by enclosing it in an if statement. Variability of __name__ is demonstrated in package1.module1 together with the options for importing modules contained in a package as shown below.

The same naming conventions apply: a single leading underscore can indicate modules that are only for use by other modules in the same package, but a prepended double underscore does not result in name-mangling for module names.

Packages allow modules to be grouped according to the type of functionality they expose or the usage they have within an application. They can be nested to any depth: package1.sub_package.subsub_package.subsub_module. When importing a module no account is taken of the location of the importing module in the package hierarchy and the full canonical path must be used.

If package1 contains two modules, module1 and module2, there are three options for importing a function module2.where() into module1:

import package1.module2                           # import option 1
from package1 import module2                      # import option 2 (recommended)
from package1.module2 import where as where2      # import option 3

def where():
    return __name__

def elsewhere():
    result = package1.module2.where()   # using import option 1
    result = module2.where()            # using import option 2
    result = where2()                   # using import option 3
    return result

if __name__ == '__main__':
    print(where())
    print(elsewhere())

To create a new package in PyCharm, in the project tree right-click on the project name or the package under which the new package is to be created and select New > Python Package. PyCharm creates an __init__ module within the new package. The purpose of this module is to hold any package initialization not done by constituent modules. When __init__ does not declare an __all__ list of module names, any modules imported into it, from the current package or elsewhere, are made visible in any module using the statement:

from package import *

When the alternative is used, declaring in __init__ an __all__ list of modules to be imported by import *, __all__ = ['module1', 'module2'] for example, the list can contain only modules in the immediate package. Also, its presence excludes access to the modules imported in __init__'s own import statements as described above.

Note that circular imports are problematic. If module1 imports module2 then module2 cannot simply import module1. It may be possible to circumvent this restriction by importing individual items, as in import option 3 above and module2's equivalent of module1's elsewhere() in the download, but be aware of possible problems: type checking failing to recognize an imported class, for example.

Summary

Interface / implementation separation isolates client code from changes to an implementation. It can also provide levels of abstraction, used to mitigate the complexity inherent to large software systems. A user interface for example, can hide its implementation and the various subsystems that support it. Similarly each subsystem can hide its own implementation from code in the user interface. This layering of functionality can continue down through as many levels as may be required following the design strategy of information hiding.

OOP languages provide a level of interface/implementation separation through encapsulation. Subtype polymorphism used with abstract or interface classes can strengthen encapsulation. If sufficient pre-planning is put in place, provision of read-only interfaces for example, object integrity can be ensured.

Python relies less on enforced encapsulation than on code written with the precepts of information hiding in mind. Python's name-based attribute lookup (duck typing) has two effects here: any class supplies a conceptual interface that can be supported by any other class supplying the same method names and exposed functionality, and direct access to fields in a base class can be overridden using properties.

Import statements within the __init__ module can be used to create virtual groupings exposing modules and/or classes from other packages. This facility suggests the option of creating multiple groupings by type of functionality and by usage within an application, for example.

Reference

Parnas, D. L. (1972) 'On the Criteria To Be Used in Decomposing Systems into Modules', Communications of the ACM, vol 15 no. 12 [Online]. Available at https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf (Accessed 22 September 2015)

[Algorithms 14: Subtype polymorphism and encapsulation (c) Martin Humby 2015]


## Algorithms 13: Using Stack – recursion

Edited by Martin Thomas Humby, Tuesday, 22 Sept 2015, 17:23

The most fundamental use of stacks relates to how functions are called. One function can call another function which can call a third and so on to any depth until there is no more stack left to support further calls.

When a call is made one method of passing arguments to a function is to push the data onto the stack along with the address of the program statement execution must return to when the function has completed its work. This stack is referred to as the call stack or sometimes the machine stack. A specific CPU register, the stack pointer (SP), may be provided to manage it. In this case a call instruction says: push the return address onto the stack and jump to the function address, making that part of the pushing automatic.

Note that there are several differences between your average call stack and the stack implemented in Algorithms 10. In the majority of cases the call stack grows downwards in memory giving local variables a positive offset from SP. SP points to the last element put on the stack not the next free space so it is decremented down before data is pushed on. It is usual to discuss the depth of function calls meaning the number of calls made before a return starts the climb back up to calls higher in the calling hierarchy. An upside down stack fits well with this view:

When called the first action function code takes is to push the old value of another CPU register, the frame pointer (FP), onto the stack. The current value of SP is then stored in FP allowing modification of SP to make space on the stack for any additional local variables the function has.

On a return from a function the value of SP is restored from FP and FP gets its old value back, popped off the stack. Lastly, a return instruction in the machine code tells the CPU to send execution back to the instruction address stored on the stack by the call and currently referenced by SP. Thus FP and SP work together to walk up and down the stack.

The blocks of memory on the call stack for each individual function call are stack frames.  Function local variables that cannot be held in registers because they are too big to fit or because those registers may be used by the callee when that function calls another function are put on the stack. Large return items can be passed back on the stack but more usually the return or a reference to it is passed in a CPU register. Thus the availability of this storage space makes nested function calls possible.
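The chain of stack frames can be inspected from Python itself using the CPython-specific sys._getframe(); an illustrative aside, not part of the download:

```python
import sys

def c():
    # walk the frame chain from the current call back towards the top frame
    frame, names = sys._getframe(), []
    while frame is not None:
        names.append(frame.f_code.co_name)
        frame = frame.f_back
    return names

def b():
    return c()

def a():
    return b()

frames = a()
print(frames)   # ['c', 'b', 'a', '<module>'] when run at module level
```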

Recursion

The ability to call a function from within another function allows a strategy called recursion to be used. In this scenario a function takes some action to reduce the complexity of a problem and then calls itself until the problem becomes simple enough to solve without further calls.

If a large list of items is to be sorted for example, sorting the entire list is going to involve a lot of comparisons and moving items around. Sorting half the list will be easier so we can code a function that passes half the list to another version of itself that in turn halves the list and passes it on to a third version. Eventually a version will get a list containing one or maybe no items. Such lists can be deemed already sorted in both cases. Unfortunately this is not the full story and the final episode will have to keep until we look at sorting in detail.

A simpler problem is to compute the factorial of a number. The factorial of n, written n!, is the number of different ways n items can be arranged in a sequence, just the ticket for working out our chances of winning the next offering of Lotto.

One way of thinking of this is to say we have n choices for the first item to put into a sequence, n - 1 choices for the second and so on until we get a single choice for the last. If we start by creating n sequences then each of the original sequences clones into n - 1 sequences to accommodate the choices available at the next choosing. Each of these n * (n - 1) sequences then clones again to satisfy each of the n - 2 choices remaining, and so on.

To get the total of possible sequences, n needs to be multiplied by the product of all the other choices for each choosing down to the last: n! = n * (n - 1) * (n - 2) * ... * 1. In other words

def factorial(n):
    if n > 1:
        return n * factorial(n - 1)
    else:
        return 1    # for the last choice

This function returns the correct value for 0!, namely 1, and if by some horrible breach of the precondition for factorials n is negative, recursion never starts.

The if statement divides actions to be taken into two cases. The n > 1 case is the recursive case: the problem requires further simplification. The  else: case is the base case where further simplification is no longer needed and it is time to work out a value and return it, not too hard for factorial. A recursive function must include both cases and progress towards the base case must be guaranteed.

Emulating the call stack

To see the call stack in action open the file recursive.py from the download in PyCharm and change return to return 1/0 in factorial() to get a divide by zero error. Run the file and output shows the error in line 7, the base case, and the four previous calls for the recursive case in line 5. To display line numbers at any time right click on the left grey border to the edit pane and select the option from the context menu. This display shows how the stack was used to record lines in the code where a call was made and therefore the location after the call where execution must jump back to on a return to process the returned value.

The values of n stored in each stack frame can be seen using PyCharm's debugger. Delete the error generator and left click the grey left border at line 4 to set a breakpoint on the if statement: a red circle appears and the line is highlighted. Right click in the editor pane and select Debug 'recursive' from the context menu. Single step through the program using the F8 key or by clicking the two blue circles or blue down arrow icon on the top border of the Debug pane. Note the decreasing values of n on the way down and the increasing values stored in each stack frame on the way back up. When execution reaches the print() statement press F9 to finish.

Note that you can stop any program running in PyCharm using Ctrl + F2 or by clicking the red square in a Debug or Run pane's left border.

Another view of values on the call stack can be got from running print_view.py. This version uses print statements to log the value of n. Together the debugger and print statements provide alternatives for revealing program behaviour. For certain situations print statements provide a better solution and by viewing console output while single stepping a program the two alternatives can be used together.

Recursion may provide a simplified view of some problems but in essence all it does is to allow the call stack to be used for storing and retrieving a sequence of values in LIFO order. It should therefore be possible to use a stack to achieve the same result. Using the stack ADT from Algorithms 10, factorial() can be written non-recursively:

from stack import stack as Stack

def stk_factorial(n):
    stk = Stack(n - 1)      # create a stack for n - 1 elements
    while n > 1:
        stk.push(n)
        n -= 1              # decrement n by 1

    result = 1
    while not stk.is_empty():
        n = stk.pop()
        result *= n         # multiply result by n
    return result

Tail recursion

When a recursive function does all its work on the way down so that no additional computation is required after the recursive call it is tail recursive: the recursive call being last before the return - in the tail. In many cases a tail recursive function can be modified to do all computation within a loop and recursion is not required.

The function factorial(n) above is not tail recursive: the return from the recursive call is multiplied by n after it is received. To make it so, another parameter is needed to pass forward the result of the computations done so far:

def tr_factorial(n, result=1):
    if n > 1:
        return tr_factorial(n - 1, n * result)
    else:
        return result

Converting this version to a loop gets:

def lp_factorial(n):
    result = 1
    while n > 1:
        result = n * result
        n = n - 1
    return result
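As a quick cross-check, the three versions, reproduced here so the snippet is self-contained, should agree for any n:

```python
def factorial(n):
    if n > 1:
        return n * factorial(n - 1)
    return 1

def tr_factorial(n, result=1):
    if n > 1:
        return tr_factorial(n - 1, n * result)
    return result

def lp_factorial(n):
    result = 1
    while n > 1:
        result = n * result
        n = n - 1
    return result

# all three compute the same values, including 0! == 1
for n in range(10):
    assert factorial(n) == tr_factorial(n) == lp_factorial(n)
print(factorial(6))   # 720
```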

Attempting full implementation within a loop is not always a viable solution. The Quicksort algorithm supplies an example. The list to be sorted is partitioned into two subdivisions: basically elements less than or greater than a partition value. In a fully recursive version there are two recursive calls, one for each sub-partition. No computation follows the calls, so they could be classed as tail calls. However, because there are two of them only one can be absorbed into a loop: partitioning and sub-partitioning of the left partitions, say, can be done in a loop, while right partitions are best passed to a recursive call to avoid the complication of a local stack.
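A sketch of this scheme, using a simple Lomuto-style partition; the download's Quicksort may differ in detail. Left partitions are handled by the loop, right partitions by the single recursive call:

```python
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        # Lomuto partition around the last element
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        quicksort(a, i + 1, hi)   # right partition: one recursive call
        hi = i - 1                # left partition: handled by the loop

data = [5, 3, 8, 1, 9, 2]
quicksort(data)
print(data)   # [1, 2, 3, 5, 8, 9]
```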

Summary

Use of a stack for subroutine linkage, local variables and arguments was first described by Alan Turing in his proposals for the Automatic Computing Engine (ACE) (Turing 1946). Possibly independently (this is the opinion of Donald Knuth (Knuth 1968)), the stack concept was further developed by Edsger Dijkstra. ALGOL 60 provided recursion; early FORTRAN did not use a stack for calling subroutines, and recursion was not possible.

Recursion is a useful tool for problem solving in many cases, far beyond the simple examples discussed here. It can be viewed as a special case of decomposition: breaking a problem down into more manageable parts. It is also a special case of iteration.

Consider a problem that could be solved using nested loops but the depth of nesting is unknown when it is coded. What the innermost loop or the base case does is plain to see but the number of outer loops required to get there can vary from one instance of a structure to another. Quicksort is one such problem: the depth of recursion depends not only on the size of the list but also on its composition both of which can vary for different lists.

A similar problem, solved using recursion, can be found in the _getstr() method  in array_types.py from the download. This method constructs a string representation of an array and the function must be able to deal with any number of dimensions of any size. The loop in the else: recursive case calls _getstr() to generate the required number of additional loops at the next level of nesting. For the 3 x 3 x 3 array from Algorithms 8, the innermost loop in the base case will add element strings for one of the z direction vectors. The recursive case in the previous call iterates over the vectors in the current row while the call before that iterates rows. Rows, vectors, hyper-vectors, all are the same to the recursive case.

Another use of a stack in program execution is to deal with exceptions. When an exception occurs execution is passed to code in the runtime environment to deal with it. This code manages a walk back up the call stack executing any finally: blocks until a handler for the exception is found. This process, which may involve stepping over some stack frames in entirety, is known as stack unwinding. Wikipedia (2015) describes various occurrences. While executing the finally blocks or a handler another exception may occur and so on for that exception. Progress with a previous exception is held on a stack while the current exception is processed. Similarly a stack in the form of a linked list can be used to record the location of finally blocks and except exception handlers encountered as functions are called.
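The order in which finally: blocks run during unwinding can be seen in a small example; it appends to a list rather than printing, purely so the order is easy to check:

```python
log = []

def inner():
    try:
        raise ValueError('boom')
    finally:
        log.append('inner finally')     # runs first, as the stack unwinds

def outer():
    try:
        inner()
    finally:
        log.append('outer finally')     # runs next, one frame up

try:
    outer()
except ValueError as e:                 # the handler is found two frames up
    log.append('handled: ' + str(e))

print(log)   # ['inner finally', 'outer finally', 'handled: boom']
```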

To deal with lottery odds computations the functions nPm() and nCm() are provided in recursive.py. Neither of these functions is recursive as such, but nCm calls the recursive factorial() and finds the odds of a 6-number win to be 1 in 13,983,816.
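The download's versions are not shown here, but implementations consistent with the description might look like this (the names nPm and nCm come from the text; the bodies are assumptions):

```python
def factorial(n):
    return n * factorial(n - 1) if n > 1 else 1

def nPm(n, m):
    # ordered selections (permutations) of m items from n
    return factorial(n) // factorial(n - m)

def nCm(n, m):
    # unordered selections (combinations): divide out the m! orderings
    return nPm(n, m) // factorial(m)

print(nCm(49, 6))   # 13983816, i.e. odds of 1 in 13,983,816
```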

References

Knuth, D. (1968) The Art of Computer Programming, Vol. 1: Fundamental Algorithms, Section 2.6, Addison-Wesley.

Turing, A. M. (1946) 'Proposals for the Development in the Mathematics Division of an Automatic Computing Engine (ACE)', in Ince, D. C. (ed.) (1992) Collected Works of A. M. Turing: Mechanical Intelligence, Elsevier Science Publishers.

Wikipedia (2015) Call stack [Online], 24 April 2015. Available at https://en.wikipedia.org/wiki/Call_stack#Unwinding (Accessed 7 August 2015).

[Algorithms 13: Using Stack - recursion (c) Martin Humby 2015]


## Algorithms 12: Basic OOP – a List type

Edited by Martin Thomas Humby, Friday, 4 Sept 2015, 13:39

Because they are stored in a single contiguous block of memory arrays are not extendable. Memory is allocated to an exact fit, nothing more. This characteristic was reflected by the maximum size of a Stack. Now the requirement is for a List type. As we are coding the List ourselves we can give it any functionality we like including unlimited increase in size and the ability to add items not just at the end but to insert an item anywhere along a List's length.

Similarly, the requirements for List can dictate that when an item is removed at any index the list will close up to fill the gap. Additionally, List can be given a method to add a guard value at the end of the list for use when the List is searched.

Ignoring these new requirements, List has several characteristics in common with Stack and one way to make a start on coding List is to copy Stack's code, paste it into a new file or the same file and rename Stack to List. OOP provides an alternative that brings some advantages: inheritance. Using inheritance, all that needs to be done to get Stack's fields and functionality into List is to include Stack in the class declaration:

from my_OOP_stack import stack as Stack

class List(Stack):
    pass

Readers will note I have imported stack as Stack. The Python convention for class names is to use CamelCase but when it started out stack was not a class as such, simply the next best thing to a record. A short breakdown of Python naming conventions can be found in PEP 8, the Python style guide. The keyword pass allows a class or function to be defined with an empty body.

If you would like to follow the incremental development of List, open the List Investigations project from the download in PyCharm. Create a new file list.py and copy the above code into it. There is a copy of my_OOP_stack_test.py in the project with the import statement changed to: from list import List as stack. Without any doubt the test code in this file views a List as a stack. Run the test script and the output is identical to that got in Algorithms 11 when it was run with a genuine stack.

This is not surprising. Currently List has no functionality or fields other than those it has inherited from stack. Even so, it demonstrates three facts. List has inherited stack's interface: if it hadn't, calling stack's methods would get an error. Similarly, it has inherited stack's functionality: if it hadn't, output would differ or an error would result (despite the test script being far from comprehensive, please take my word for it). Together these two points comprise what is sometimes known as class inheritance (as opposed to pure interface inheritance, to be dealt with in a later article).

The third fact is that List can be viewed as a stack. List is a subclass, a subtype, a child class or a descendant of stack; stack is a superclass, parent or ancestor of List. The ability to view a subclass as a superclass is known as subtype polymorphism.

In some languages a view may become a reality. C++ features class instances by reference and by value. When a by-value instance is assigned to a variable of the same class all its fields are copied getting a new instance. If it is assigned to a superclass variable any additional fields it has are truncated off getting an instance of the superclass.

In languages that use static typing, assigning a reference to a subclass instance to a variable declared as a reference to the superclass is allowed. In Python, where variables are untyped, a reference to a subclass instance can likewise be used anywhere the superclass is expected.

Assignment the other way, superclass to subclass, is not allowed under static typing: the subclass may have declared additional methods that do not exist in the superclass and in turn these methods may access fields that the superclass does not possess. In Python you can do what you like, but errors are likely if this rule is violated.
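A minimal sketch of these rules in Python, using simplified stand-in classes rather than the download's:

```python
class Stack:
    def __init__(self):
        self.items = []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        return self.items.pop()

class List(Stack):
    def insert_at(self, index, item):   # an extra method the superclass lacks
        self.items.insert(index, item)

def drain(stack):                       # written against the Stack interface only
    out = []
    while stack.items:
        out.append(stack.pop())
    return out

a_list = List()                         # a List works wherever a Stack is expected
a_list.push(1)
a_list.push(2)
order = drain(a_list)
print(order)                            # [2, 1]

# the reverse view fails: a plain Stack has no insert_at()
print(hasattr(Stack(), 'insert_at'))    # False
```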

Redefining push( )

Because push() adds items to Stack's array it has the capability of causing an overflow. The answer is simple: when the array is full allocate a bigger one and copy the items from the old array to the new one. 'But doesn't that take a long time to do?' you may wonder.

Well yes, doing this for a medium to large array coded to show how it works in Python is quite slow. Python features list slicing, giving an equivalent to C's memcpy(), but currently that will not be used. When coded in C or Delphi Pascal or assembly language, for example, copying is much faster. Over time CPUs have been optimized to do various tasks and copying blocks of memory is one of them.

Another point, we are not copying actual items only references to them, considerably smaller than a record with even a few fields. Faced with a really big sequence other techniques would have to be considered. Adding a new array linked to the old one is one possibility but for small to medium arrays copying is fine and most likely more efficient.
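As a small aside, slice assignment shows why Python can manage such block copies safely: the source slice is copied out before the destination is written, so overlapping moves within one list are harmless:

```python
items = list('abcd') + [None]   # one spare slot at the end
items[2:5] = items[1:4]         # shift a block right; the source slice is
                                # copied out first, so the overlap is harmless
print(items)                    # ['a', 'b', 'b', 'c', 'd']
items[1] = 'X'                  # the duplicated slot is free for an insertion
print(items)                    # ['a', 'X', 'b', 'c', 'd']
```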

Copying arrays

Copying will also be needed to modify the list when an item is removed or inserted so, to avoid code duplication, functions to do the copying can be coded as utility functions in 'array_types' called by List methods. To prevent overwriting of items to be copied with those already copied two copying functions are needed:

The copy_left() function is simple. It is passed the source array, the destination array, the source index and the destination index together with the number of items to be copied. Starting at the lowest index in both arrays it iterates over the given section of the source copying items to the destination. If both the source and destination array are the same the element at the destination index is overwritten deleting that element.

copy_right() duplicates the element at a source index in the same array after moving other elements right to make room. The original element can be overwritten with an insertion. To avoid overwriting elements copying must start on the right and work leftwards. Coding it is slightly trickier, mostly because of the non-intuitive syntax of the range() generator particularly when counting down.

A generator comprises a special code construct, generally within a function. Each call gets the next value in a sequence of values.  The function call can be put in a for loop to obtain successive values one per iteration. Writing generators will be detailed at a later time.

Counting up with range() we get range(start_value, last_value + 1, increment). When there is a single argument, range(n) for example, n is the number of integers generated, starting at zero. A start_value is optional, defaulting to zero, but it must be given explicitly if an increment argument is also present. The increment defaults to 1 when not included.

Counting down with range(), this time we get range(start_value, last_value - 1, increment) with start_value greater than or equal to last_value. The increment is of course negative.
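A few calls make the conventions concrete:

```python
print(list(range(5)))           # [0, 1, 2, 3, 4]: five integers starting at zero
print(list(range(2, 6)))        # [2, 3, 4, 5]: start_value up to last_value + 1
print(list(range(5, 1, -1)))    # [5, 4, 3, 2]: down to last_value - 1, step -1
```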

The code for my copy_right() is:

def copy_right(source, s_index, dest, d_index, length):
    sx = s_index + length - 1
    dx = d_index + length - 1
    for sx in range(sx, s_index - 1, -1):
        dest[dx] = source[sx]
        dx -= 1  # decrement destination index by 1

This function is in 'array_types.py' in the download with a small test script. Readers following incremental development are invited to code a similar copy_left() function in array_types, otherwise the code is in 'array_types.py' in the 'solutions' package.
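For readers who want something to check an attempt against, here is a version consistent with the description above; the solutions package version may differ in detail:

```python
def copy_left(source, s_index, dest, d_index, length):
    # lowest index first, so a leftward move within one array never
    # overwrites an element before it has been copied
    for i in range(length):
        dest[d_index + i] = source[s_index + i]

items = list('abcde')
copy_left(items, 2, items, 1, 3)   # close up over index 1
print(items)                       # ['a', 'c', 'd', 'e', 'e']
```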

To complete the redefinition of push() an instance variable capacity or some such will be needed to hold the current size of the array. The revised version of __init__ shown below calls the superclass __init__ to set the array's initial size. Another option can be included to provide Python style typecasting of any sequence to the List type.

Type casts

In languages with static typing, compiled to native code, a typecast often means 'view the object at this variable not as the type of the variable but as some other type.' Generally there is a restriction that both types should have the same footprint in memory and perhaps a few more, but when an object is by reference all references have the same size.

(native code is machine code executed directly by the Central Processing Unit (CPU) and native to the target platform for an application. A platform is the environment an application runs in supplied by the combination of the CPU, the operating system and sometimes a Graphical User Interface (GUI))

In Python what may appear to be a typecast is in fact a call to a constructor. __init__ is called and initializes the new object based on the object passed in. For List we might have  a_list = List('abcd'). Adding code that includes this option to List.__init__ gets:

import sys
from array_types import copy_left, copy_right
from collections.abc import Iterable

DEFAULT_CAPACITY = 2    # use a low value for testing

class List(Stack):
    def __init__(self, data=None):
        if isinstance(data, Iterable) and len(data) > 0:
            data_len = len(data)
            self.capacity = data_len * 3 // 2
            super().__init__(self.capacity)
            copy_left(data, 0, self.array, 0, data_len)
            self.SP = data_len
        else:
            if data is None or data < DEFAULT_CAPACITY:
                self.capacity = DEFAULT_CAPACITY
            else:
                self.capacity = data
            super().__init__(self.capacity)
        self._max_size = sys.maxsize    # set inherited _max_size to a very large number

Iterable is the common ancestor of all classes that can be iterated over in a for loop. It is the base class for these types. As yet List is not iterable so an instance cannot be 'cast' to a List to get a new copy. The guard element, for use in a linear search of the List, will be discussed in a later article.

The method that checks whether overflow of the array is imminent will be shared by push() and an insert_at(index) method. It can be coded as a method _check_capacity():

def _check_capacity(self):
    if self.SP < self.capacity - 1:     # leave a spare element for a guard
        return
    self.capacity = self.capacity * 3 // 2
    new_array = array_(self.capacity)
    copy_left(self.array, 0, new_array, 0, self.SP)
    self.array = new_array

The new array and the existing one are separate so either of the copy functions could be used: use the simpler copy_left(). The method name starts with an underscore to indicate that it is a helper method belonging to List and not for public use by client code. I will endeavour to keep to this rule. push(item) calls the method, which will make space for the addition when required:

def push(self, item):
    self._check_capacity()
    super().push(item)

Here super().push(item) calls Stack's push to do the addition. If I was coding this for real, the original push is trivial and I probably would put its code inline rather than calling it but here the full possibilities of inheritance are used.

The methods insert_at(index) and delete_at(index) are left for interested readers to code with the proviso that the methods are included in the 'solutions' package version. Inherited pop() requires no modification unless we want to copy a reduced number of items to a smaller array, perhaps later, so the final task is to write a method to insert the guard value.

From the provision in _check_capacity() there will always be sufficient room to accommodate a guard at the end of the array. A guard is not a permanent addition so, unlike insertions and deletions, adjusting the inherited stack pointer self.SP is not required:

def set_guard(self, item):      # method name chosen for illustration
    self.array[self.SP] = item


Summary

In this section we have seen how an array can be made extensible by copying, and visited two more OOP basics: class inheritance and polymorphism.

Two different requirements for copying elements left or right within the same array were identified. The single memmove() function seen in C, and its equivalents in other languages, computes which of the alternatives to use when source and destination overlap (C's memcpy(), in contrast, leaves overlapping copies undefined). An array_cpy() function is provided in the solutions package.

[Algorithms 12: Basic OOP - a List type (c) Martin Humby 2015]


## Algorithms 11: An OOP stack

Edited by Martin Thomas Humby, Wednesday, 23 Sept 2015, 12:42

To turn the current stack implementation into an OOP one: working in PyCharm's project tree select 'stack.py'. Copy it and then paste it back with a new name, say 'my_OOP_stack.py'. In the new file drag the cursor to select all the existing operations and press Tab once to move them within the record body. Then, with the functions still selected (complete lines required), press Ctrl + R. Find and Replace opens. In the top edit type stack and in the bottom edit self. Click [Replace all] and press Escape to close Find and Replace. The ADT stack is now an OOP class.

Moving the functions within the body has turned them into instance methods of stack. Generation of a new object using the class name as a function and the initialization provided by __init__ comprise a constructor for the class:

class stack:
    def __init__(self, max_size):
        self.array = array_(max_size)
        self.SP = 0     # the stack pointer
        self._max_size = max_size

    # .........

    def push(self, item):
        self.array[self.SP] = item
        self.SP += 1

    def pop(self):
        self.SP -= 1
        return self.array[self.SP]

Instance methods are fields of an instance and can be accessed like any other field using dot notation:

a_stack = stack(6)

push_stack = a_stack.push  # push is a field of a_stack
push_stack('||')           # pushes '||' onto a_stack

This is not normal usage and simply demonstrates the field properties of instance methods.

Here OOP brings two advantages: once an instance is created its instance methods are bound to that instance and the problem that a method might belong to another class does not arise. Secondly, this binding means we can call a method without passing the instance as an explicit argument:

a_stack.push('item')
top_item = a_stack.pop()
print('stack size:', a_stack.size(), '  top:', top_item)
# etc ...

The instance is passed implicitly to the self variable in the method's parameter list.

Should we wish to do so we can obtain a free (unbound) reference to a method from the class and pass the relevant instance to self explicitly:

stack.push(a_stack, '22')

As of right now I can't think of any circumstance where this would be required. It allows an instance of one class to be passed to an instance method of another but doing that sounds quite fraught.

Testing the OOP stack

To test your OOP stack open the file my_OOP_stack_test.py. If you did not name your OOP stack script as suggested above modify the import command to import your version. Have a look at the code: it uses try...except to handle exceptions when they occur. Run the file and compare the output with the location of print statements in the code. When the final item 666 is pushed onto the corrupted stack it is at index 4 not at index 0 where it should be.

All that remains is to fix the problem of stack corruption found in testing. In your my_OOP_stack.py modify push() and pop() to:

def push(self, item):
    assert self.size() < self._max_size, 'stack overflow'
    self.array[self.SP] = item
    self.SP += 1

def pop(self):
    assert self.size() > 0, 'stack underflow'
    self.SP -= 1
    return self.array[self.SP]

The assertions enforce the preconditions of push() and pop() by raising an AssertionError exception when the precondition is not satisfied. Note that assert is a statement, not a function. Enclosing both its arguments in a single pair of brackets creates a two-element tuple, which is always truthy, so the assertion can never fail. my_OOP_stack_test.py handles the exceptions raised by the assertions, allowing execution to continue normally. Pushing 666 now places it at index 0 where it should be.
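The brackets pitfall is worth seeing in action. A minimal sketch (the variable and message are invented for illustration):

```python
x = -1

# correct form: the assertion fires for x = -1
try:
    assert x >= 0, 'x must be non-negative'
    fired = False
except AssertionError:
    fired = True
print(fired)                          # True

# incorrect form: one pair of brackets round both arguments makes a
# two-element tuple, which is always truthy, so it can never fail
assert (x >= 0, 'x must be non-negative')
print('assertion silently passed')
```

Recent Python versions emit a SyntaxWarning for the second form, but the code still runs and the check is still dead.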

Summary

In this section method binding to an instance, an aspect of OOP, was investigated. A bound method was assigned to a variable and called. The more usual procedure, calling a method against an instance of a class with dot notation, was used in the stack test script.

When a method is called in this way with the call directed to a method bound to the object currently referenced by a variable, the mechanism involved is known as dynamic dispatch. Languages like C++ that use static typing may also supply static dispatch: the method called depends on the declared type of the variable. Clearly this cannot be an alternative in Python where the type of an object is determined at runtime.
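Dynamic dispatch can be shown in a few lines (the classes here are invented for illustration): the same call expression runs a different method depending on the type of the object the variable currently references.

```python
class Dog:
    def speak(self):
        return 'woof'

class Cat:
    def speak(self):
        return 'miaow'

# animal.speak() dispatches at runtime on the object's type
for animal in (Dog(), Cat()):
    print(animal.speak())
```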

There are several flavours of dynamic dispatch: single dispatch as above, and double and multiple dispatch, where one or more argument types also figure in deciding which method the call is directed to. Python does not feature multiple dispatch out of the box but looks suited to implementing it fairly transparently. It uses late binding, looking up methods by name in a dictionary that is subject to runtime modification when permitted by the class.
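One fairly transparent way to sketch double dispatch in Python is a dictionary keyed on the runtime types of both arguments. Everything here is hypothetical, a sketch rather than a library mechanism:

```python
class Rock: pass
class Paper: pass

def result(a, b):
    # dispatch on the types of both arguments
    table = {
        (Rock, Rock): 'draw',
        (Rock, Paper): 'b wins',
        (Paper, Rock): 'a wins',
        (Paper, Paper): 'draw',
    }
    return table[type(a), type(b)]

print(result(Rock(), Paper()))
```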

It seems there are also flavours of late binding: the Wikipedia article (Wikipedia 2015) notes use of a v-table as an example of early binding. This is the mechanism used by C++, noted in the same article as providing late binding even though the content and location of the v-table storing the addresses of dynamically dispatched methods are computed at compile time.

The assertions introduced after testing prevented stack overflow and underflow when the preconditions of the stack operations push() and pop() were violated. Generally assertions are only used for debugging purposes and more permanent ways of dealing with this kind of problem will be discussed in a later article.

Algorithms 12 will investigate some more basic OOP features including inheritance and polymorphism.

Reference

Wikipedia (2015) Late Binding [Online], 1 August 2015. Available at https://en.wikipedia.org/wiki/Late_binding (Accessed 1 August 2015).

[Algorithms 11: An OOP stack  (c) Martin Humby 2015]

Share post
Attachments:

## Algorithms 10: Abstract data types – Stack

Visible to anyone in the world
Edited by Martin Thomas Humby, Monday, 25 Jan 2016, 16:49

Functions hide their workings in an implementation. Data types can be constructed with the same idea in mind. Functions implementing operations on a structure supply an interface for use by client code: other functions that use the structure, that is. Variables used to build the structure and any helper functions provide a separate implementation. For a structure developed using non-OOP techniques, in Ada83 for example, complete separation of the interface and implementation is possible by putting the code in separate files.

OOP languages tend to rely less on physical separation, using keywords such as public to define parts of the interface, with private and protected indicating elements of the implementation. OOP has reinvented separation of interface and implementation by calling it encapsulation: a class exposes public methods and encapsulates the rest.

There are two main advantages to this approach: implementation details can be ignored when using the data type and different implementations of a structure can be substituted without affecting the operations seen in the interface by client code. Additionally, in some compiled languages a unit or package containing the implementation can be modified and compiled separately without the requirement to recompile client code. Only when the interface has changed is a rebuild required.

Perhaps a particular implementation works efficiently for a small number of stored items but does not scale well to a large number. Another implementation can accommodate a vast number of items but carries an overhead that makes it less efficient than an alternative when used with only a few. If one type can be substituted for the other without affecting client code, changes to the software will be minimized when it becomes apparent that a particular implementation is not up to the job. Time, cost, and a great deal of frustration and unproductive effort will be saved.

Viewed in this way a data structure is seen as an abstract data type (ADT). An ADT is defined by the operations that can be carried out on it, the pre and post conditions of those operations and the invariant condition of the ADT. Like the definition of a relation discussed in Algorithms 3, a particular implementation does not figure in an ADT definition. The basic operations on a stack of data items, for example, are to push an item onto the top of the stack and to pop an item off of it. Its invariant can be defined as its size always being greater than or equal to zero and less than or equal to a maximum size if one is specified.



Pre and postconditions

Some functions will only produce the required result if certain conditions pertain when the function is called. For stand alone functions that do not result in side effects on persistent data, such conditions can only relate to restrictions placed on the input set. For operations on a data structure the current state of the structure may also need to be taken into account. These prerequisites for calling a function are its preconditions.

After the function exits another set of conditions will apply. A statement of these postconditions is needed to check that a particular implementation of the function works according to requirements. For example, the pre and postconditions of a typical math square-root function defined in the style of VDM are:

SQRT(x: real) y: real
pre  x >= 0
post y*y = x and y >= 0

The input must be zero or positive; the output when multiplied by itself must equal the input and be greater than or equal to zero because sqrt() always returns the positive square root. Alternatively the relationship between pre and postconditions can be represented as a Hoare triple {P} C {Q}, where P and Q are predicates and C is a program that terminates when it is run.
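The SQRT specification translates quite directly into runtime checks in Python. A sketch (note that with floating point the postcondition y*y = x can only be checked to a tolerance):

```python
import math

def sqrt_checked(x):
    assert x >= 0, 'precondition violated: x must be >= 0'
    y = math.sqrt(x)
    # postcondition: y*y = x (to floating-point tolerance) and y >= 0
    assert y >= 0 and abs(y * y - x) < 1e-9 * max(1.0, x)
    return y

print(sqrt_checked(2.0))
```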

The Vienna Development Method (VDM) founded on Hoare Logic supplied a basis for the realization of these concepts in software design but various less formal specification languages are now available. OCL for example, an adjunct to the Unified Modelling Language (UML), allows the use of formal English to define conditions and invariants.

VDM, OCL etc. are typified as being formal methods. Their representation as Design by Contract has resulted in further deformalization, although this may not have been the intention of Bertrand Meyer, who coined the metaphor to represent the relationship between client code and code supplying a service.

After reviewing various formal and semi-formal definitions on the www in relation to a stack (there are many), model based VDM looked too cryptic, algebraic based definitions likewise, and OCL appeared to lack precision. Perhaps formal English is the answer. Of course I am no expert in any of these approaches so my judgement is open to doubt. The idea I found most appealing used a series of operations in { } brackets (Wikipedia 2015). Applying this idea to a stack ADT, a possible set of operations could be formulated as:

Stack(max_size: integer): r: Stack
pre  true
post r = a new Stack instance
r.max_size = max_size
size(r) = 0
is_empty(r) = True
is_full(r) = False
invariant: size(r) >= 0

size(S: Stack): r: (0.. S.max_size)
pre  true
post r = j - k where {S = Stack(), j * push(S, i), k * pop(S), j >= k}
{the sum of valid pushes - pops since S was created}

is_empty(S: Stack): r: bool
pre  true
post r = (size(S) = 0)

is_full(S: Stack): r: bool
pre  true
post r = (size(S) = S.max_size)

top(S: Stack): r: Item
pre  not is_empty(S)
post r = i where {i = pop(S), S = push(S, i)}
{defines i, S after i has been popped off and pushed back on again}

push(S: Stack, i: Item): None
pre  not is_full(S) and n = size(S)
post top(S) = i and size(S) = n + 1

pop(S: Stack): r : Item
pre not is_empty(S) and i = top(S) and n = size(S)
post r = i and size(S) = n - 1

The precondition true specifies the operation is valid for any state of a stack: there are no preconditions. An ADT invariant specifies conditions that are True for any instance and remain True throughout its lifetime irrespective of the operations carried out on it.  It is therefore appropriate to place the invariant after the postconditions for initializing an instance.

With these operations in place a visualization of a stack with push and pop waiting to execute is as below:

When the item on the left is pushed onto the stack and pop() is called that item will be the one on the top of the stack and will be removed. This characteristic makes a stack a Last-In-First-Out structure, a LIFO, as opposed to a queue which features First-In-First-Out, a FIFO.
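The LIFO/FIFO difference can be seen with Python's built-in types, standing in for the stack implemented below:

```python
from collections import deque

stack_items = []
for item in (1, 2, 3):
    stack_items.append(item)       # push onto the top
print(stack_items.pop())           # 3: last in, first out

queue_items = deque()
for item in (1, 2, 3):
    queue_items.append(item)       # join the back of the queue
print(queue_items.popleft())       # 1: first in, first out
```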

A traditional implementation of a stack without using any OOP techniques is best done in a separate file called 'stack.py'. The first step is to define a record to hold persistent data for the stack instance. By persistent I mean data that persists from one access of the stack to another (for data to persist from one run of a program to another it needs to be placed in backing storage i.e. put in a file saved to disk or some other medium).

An array to hold items in pushed order is required. A stack pointer points to the next free location in the array. When the array has zero based indexing, size and the value of stack-pointer are the same: only a stack pointer is needed. Maximum size is the size of the array. Using Python's record equivalent as before gets:

from array_types import array_

class stack:
    def __init__(self, max_size):
        self.array = array_(max_size)
        self.SP = 0     # the stack pointer
        self.max_size = max_size

Now define the stack operations:

def push(stack, item):
    stack.array[stack.SP] = item
    stack.SP += 1

def pop(stack):
    stack.SP -= 1
    return stack.array[stack.SP]
# etc.

A complete set of operations is in 'stack.py' from the download project in Stacks.zip. Test scripts are also included. There are a couple of points to note from the tests, in part resulting from the characteristics of the underlying list type used to implement array_.

When there is an attempt to push an item onto a full stack this overflow produces an exception. The exception is raised before SP is incremented making the push() operation valid as far as it goes. However, when an underflow occurs from popping an empty stack, the stack pointer is decremented to -1 before the invalid removal is attempted. There is no exception but this is neither here nor there: the stack is already corrupted and will no longer function correctly. The value of the stack pointer SP is also the size of the stack and the ADT invariant is broken.
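The corruption can be reproduced in a few lines; Python's negative indexing is what masks the error. The names below mirror the stack code above:

```python
array = [None, None, None]   # the empty stack's underlying storage
SP = 0                       # stack pointer, also the stack's size

# pop on an empty stack: decrement first, then read
SP -= 1
item = array[SP]             # no exception: index -1 is the last element
print(SP)                    # -1, so the invariant size >= 0 is broken
```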

For those with a belief that design by contract is the answer to all life's problems this may be of no consequence: calling pop() on an empty stack is a clear violation of pop's precondition. This is not my belief and these problems will be dealt with in a later article.

The second point is that when items are popped off the stack the stack pointer is decremented but items are not actually removed. They will remain in the array until an item is pushed into the space they occupy. For a limited number of small objects this is not a problem but for a large structure replacing references to objects with a reference to None in pop() might be required. A test file for 'stack.py' is in 'stack_test.py'.

Testing the stack

In a test file possible syntax to use stack operations is:

from stack import *       # import all stack's identifiers by name

a_stack = stack(6)
print('stack size:', size(a_stack), '  stack array:', a_stack.array)
push(a_stack, '||')
p = pop(a_stack)

print(empty(a_stack))

This is not at all bad and you could begin to wonder what OOP is all about. A problem might come up though if a queue was also required and someone had been foolish enough to implement removing an item from the front of the queue as pop(). Then with

from stack import *
from queue import *

queue's pop() would hide stack's pop() and the wrong function would get called.

A way around this problem is to import all queue's identifiers individually giving any that clash with stack's an alias:

from queue import join, pop as leave, .....

This alternative is open to error when there are many identifiers. A way that avoids any possibility of error is to import only the file name and prefix functions with it:

import stack
import queue

a_stack = stack.stack(6)
print('stack size:', stack.size(a_stack), '  stack array:', a_stack.array)
stack.push(a_stack, '||')
p = stack.pop(a_stack)

This alternative is acceptable for a few calls but not when there are pages of code with many references to one or more stacks, I think. Perhaps it is time to see what OOP has on offer.

Summary

Here we have looked at the characteristics of Abstract Data Types in general and in relation to a stack ADT. The concepts of preconditions, postconditions and class invariants were introduced. Testing the stack identified overflow and underflow problems. There was also a problem when multiple imports were used, with one import hiding the identifiers of the other. Algorithms 11 will address these problems.

Reference

Wikipedia (2015) Abstract Data Type [Online], 4 July 2015. Available at https://en.wikipedia.org/wiki/Abstract_data_type (Accessed 26 July 2015).

[Algorithms 10: Abstract data types - Stack     (c) Martin Humby 2015]

Share post
Attachments:

## Algorithms 9: Records

Visible to anyone in the world
Edited by Martin Thomas Humby, Tuesday, 19 Jan 2016, 23:01

Again Python offers no exact equivalent of a record type and to get a feel for this primitive we must look elsewhere. A Pascal record definition for use in an address book might be

type
   TPerson = record
      given_name, other_name, family_name: string;
      address: record
                  house_number: integer;
                  street, postcode: string;
               end;
      email: TEmail;
      phone: string;
      mobile: string;
   end;

Having defined this type instances can be declared in a Pascal program:

var
   Johny: TPerson;
   all_friends: array[0..current_friends] of TPerson;
   next_to_invite: TPerson;

These variables are simply place-holders: none of their fields have a value, or at least not one that is any use. To access the fields and assign values the field name is used, placed after a dot:

begin
   Johny.given_name := 'Jonathan';
   Johny.other_name := 'N/A';
   Johny.family_name := 'Hansen';
   Johny.address.house_number := 22;
   // etc.

This works both ways. Having assigned values they can be retrieved with this dot notation:

   next_to_invite := Johny;
   invite_name := next_to_invite.given_name + ' ' + next_to_invite.family_name;

The utility of records is now apparent. To store the different fields in separate arrays would need the same number of 'parallel' vectors as there are fields in the record. Reverting back to Python and assuming each array has been given the same name as a field:

from array_types import array_

current_people = 50
people_given_names = array_(current_people)
people_other_names = array_(current_people)
people_family_names = array_(current_people)
people_address_house_number = array_(current_people)
#   etc.......

current_person = 0
people_given_names[current_person] = 'Jonathan'
people_other_names[current_person] = 'N/A'
people_family_names[current_person] = 'Hansen'
people_address_house_number[current_person] = 22
#   etc.......

next_to_invite = 0
invite_name = people_given_names[next_to_invite] + ' ' + people_family_names[next_to_invite]
print(invite_name)
#   etc.......

Python records

The nearest type to a record Python has on offer is a class. Classes are an OOP construct and for the present OOP will be avoided as far as possible. With functions, variables referencing a function, and records understood, OOP requires only two additional, very simple concepts.

The current problem is that in Python there are no free variables: when a variable is declared it must be bound to some object by assigning a value to it. Doing this also defines its current type, or perhaps more accurately the variable itself is typeless and only the data it references has a type. In the Pascal example, a variable declaration got a free variable, but a type had to be specified and only objects of that type could be assigned subsequently.

Additionally, Python variables that belong to an instance of a class can only be declared by assigning a value to them in a function that belongs to the instance (an instance method in OOP terms). Before the method is called the variables do not exist. There is a special method called __init__(self) intended for initialization and called automatically when an instance is created so it is convenient to use this method. The variable self refers to the instance and must also precede the field names declared in the method:

class Address:
    def __init__(self, house_number, street, postcode):
        self.house_number = house_number
        self.street = street
        self.postcode = postcode

class Person:
    def __init__(self, given_name, family_name, other_name='',
                 house_number='', street='', postcode='',
                 email='', phone='', mobile=''):
        self.given_name = given_name
        self.other_name = other_name
        self.family_name = family_name
        self.address = Address(house_number, street, postcode)
        self.email = email
        self.phone = phone
        self.mobile = mobile

The number of arguments __init__() is written to accept, either positionally or as named arguments, is of course optional. It could have no arguments (other than self) and initialize all fields to None or some other default. It is invoked by calling the class name as a function thereby creating and initializing an instance of the type:

a_person = Person('Jonathan', 'Hansen')

When all fields have been initialized to some value they can be updated and accessed exactly as for the Pascal record.

a_person.email = 'jhansen@concom.com'
a_person.address.house_number = 22
a_person.address.street = 'Nurdle Lee'
print(a_person.given_name, a_person.family_name, a_person.address.house_number,
      a_person.address.street, a_person.email)

Used in this way records can provide data structures of some complexity that are simplicity itself to create and access.

Summary

Records provide a means of accessing fields with diverse types by field-name. All fields exist in a single entity and the unwanted complication of holding these elements in parallel arrays is removed. The nearest Python record equivalent is in fact an instance of an OOP class having no instance methods other than  __init__(), required because Python does not feature free variables.

A Pascal record was used as an example of a basic record type, but there are a number of differences between a record in Pascal (or in C, C++, ALGOL) and a Python equivalent. Pascal records are by-value: a record, like most other basic types, exists at the declared variable. Initially values are null or undefined and must be given a value before they can be used. Assignment from one variable to another means copying all fields to the new location. A Pascal by-reference type must be declared and initialized as such. Not so in Python, where all types are by-reference with transparent access, as discussed above and in Algorithms 3 in connection with mutability.
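The by-reference behaviour is easy to demonstrate with a minimal record class (invented here for the purpose):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

a = Point(1, 2)
b = a           # no copy: b references the same record as a
b.x = 99
print(a.x)      # 99: a Pascal by-value assignment would leave a.x = 1
```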

In common with most programming languages Pascal features static typing: the type of a variable is declared in the code and checked by the compiler at compile time. With certain relaxations resulting from type casting and OOP inheritance, only objects of a variable's declared type can be assigned.

Python provides dynamic typing: the type and suitability of an object for processing by a particular function is checked at runtime when the code is executed. Objects of any type can be assigned to a variable and passed to any function but an error or unplanned results will be seen if an object is not of the intended type or types.

To demonstrate the simplicity of records I have put my money where my mouth is and implemented a simple address-book application that does indeed read records from a file on start-up and save any modifications on shut-down. Clearly some additional fields would be helpful in a real address-book but I have stuck with a few less than those shown previously. The code is in 'address_book.py' in the download.

Points of interest might be that there are no global variables and no side-effect modification of objects passed as arguments, apart from the array of records used to store details of people in the book. This array can be seen as an abstract data type defined by indexing and assignment. There is no GUI and the application is menu driven, perhaps a reminder of those older simpler days. If you give the application a try do not forget to click in the Run pane to locate the cursor there. I always forget to do this and end up typing commands into the editor and messing up the script.

[Algorithms 9: Records (c) Martin Humby 2015]

Share post
Attachments:

## Algorithms 8: Arrays and records

Visible to anyone in the world
Edited by Martin Thomas Humby, Tuesday, 11 Aug 2015, 23:07

Arrays and records supply primitive data structures that can be used to construct other more complex types. As they stand these basic structures form part of several algorithms and make a good place to start.

An array is a monolithic sequence of elements, usually all of the same type. Ordering of elements is sequential so they can be addressed by one or more indices that identify a particular element. It makes sense to refer to the arrangement of elements as being in a line even when the line has a number of disjunctions to produce rectangular or other shapes. There are always first and last elements and, for inner elements, next and previous elements. These properties make arrays one of a group known as linear data structures.

The elements of a record, generally referred to as its fields or members, may or may not be of the same type and are addressed through identifiers, typically a name. The arrangement of fields in a record can be seen as forming a tree with identifiers holding an offset from its base. This is a somewhat unusual view but illustrates the fact that records are not linear as such.

Arrays

Consider an array that has been initialized so that all its elements contain an integer value of zero as shown below. The indices to elements have been included for convenience and the array has been assigned to an identifier an_array making it accessible:

In many languages creating this array would mean that all its zeros were contained in a single contiguous block of memory with all bytes set to zero. Python does provide this kind of array with its array type initialized to hold integers but I want an array capable of holding any type including records and supplying the option of multiple dimensions. Arrays with these capabilities can be got from array_types.py in the Arrays.zip attachment.

Initializing one of these arrays with integers the result is as shown below:

The array comprises a contiguous block of references all pointing to the one and only int(0) object supplied by Python. This difference is of no great importance because integers and their like are immutable. One integer zero is as good as any other or indeed the same one.
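That all elements reference the one and only int(0) object can be checked with the is operator; a plain list stands in here for array_:

```python
zeros = [0] * 5              # five references, one immutable int object
print(all(z is zeros[0] for z in zeros))   # True

zeros[2] = 7                 # rebinding one element leaves the rest alone
print(zeros)                 # [0, 0, 7, 0, 0]
```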

Arrays are accessed by indexing into the array. To get the 10th element of a one-dimensional array, also called a vector, we write:

element_10 = an_array[9]

The variable element_10 now holds a reference to the same data as the tenth element of an_array, which may or may not be mutable. With the single proviso that modifying a mutable element can have external effects, an array can be viewed as holding its elements directly or holding references to those elements. The same is true of any variable.

Arrays may have one, two or more dimensions and boundaries that are not rectangular. Elements can form a triangle or a spiral arrangement for example:

In all cases there must be a unique mapping between the one or more indices and the elements of the array. A spiral type could map by [layer, element-in-layer]. Gaps in the elements mapped in the underlying array may be of no concern. The fact that index zero in a vector is not used is preferable to subtracting 1 from all indices when they are used with the array. Mapping the years 2000 to 2015 to array elements simply requires subtracting 2000 from submitted indices but the series 2000, 2005, 2010, .... indicates using a bit more arithmetic.

The array type in array_types.py is called array_ to distinguish it from Python's type. To create an instance of a 3 x 3 x 3 cubic array the syntax is

array_3D = array_(3, 3, 3, numbered=True) # number each element sequentially
array_3D[0, 0, 0] = 'first'
array_3D[2, 2, 2] = 'last'
print(array_3D[0, 0, 0], array_3D[2, 2, 2], '\n\n', array_3D) 

This syntax is typical of languages that provide multidimensional monolithic arrays, Pascal for example. It is not the norm in Python, which does not provide these types out of the box. But no worries: the syntax for accessing elements of one-dimensional arrays is identical in both cases. Python lists and arrays will be discussed at a later time.

To get the software into your equivalent of C:\Users\Martin\PycharmProjects\ALGORITHMS\Arrays and records follow a similar procedure to that given for Truth Tables in Algorithms 5:

Array visualization

Perhaps you might care to experiment by instantiating some larger arrays and printing them. Maybe assign a marker string to an element and see where it occurs in the printout. A few examples are in array_types.py. To limit the number of different instances printed at any one time PyCharm provides a useful facility for inserting or removing # from lines of code so they are executed or not: highlight all or part of each line and press Ctrl + /.

A sequentially numbered 3 x 3 x 3 cubic array, like the one in the arrays selection shown previously, with a '#' assigned to element [1, 1, 1] prints as shown below. The top block shows the z direction vectors running back from the top row of the 'front' face and so on for the other two rows. In a 3 x 3 x 3 x 3 array there will be additional vectors running at right angles to the z direction. However, turning 90 degrees from z does not bring us back to x or y but into a mysterious 4th dimension normal to z. There are therefore 3 times as many vectors and the top block of a printout shows the contents of the vectors at right angles to the first z direction vector.

[0, 1, 2,   3, 4, 5,   6, 7, 8,

 9, 10, 11,   12, #, 14,   15, 16, 17,

 18, 19, 20,   21, 22, 23,   24, 25, 26]

Any array from a printout arranged as shown above (but with the # put in quotes) can be assigned to an array with the same number of elements:

an_array.assign([0, 1, 2, .... # etc.

Lookup tables

Arrays provide the fastest possible access to data that can be arranged sequentially on some ordinal. Ordinals are subsets of the ordinal numbers {0, 1, 2, ...} but other values can be mapped to them. The set of characters can be mapped directly to the ordinals. Strings cannot because they do not have a fixed length. While they can be placed in alphabetical order there can be any number of strings over an interval from 'A' to 'B', say.

Values that can be mapped to ordinals must have a least term and each subsequent term must have a fixed relationship with the previous term or terms. Generally, term(n) = f(term(n-1)) where f is some function. Where mappings are not unique, the case of 2000, 2005, 2010, ... for example, the consequences must be known and taken into account when a value is looked up in the array. A lookup of 2003 could generate an error, return the value for 2000 or alternatively for 2005, or some interpolation could be applied.
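A sketch of the 2000, 2005, 2010, ... mapping with the round-down policy; the data values are invented and a plain list stands in for the array:

```python
data = [1.0, 1.5, 2.1, 2.8]          # values for 2000, 2005, 2010, 2015

def lookup(year):
    # map the year to an ordinal index; non-multiples of 5 round down
    return data[(year - 2000) // 5]

print(lookup(2010))   # 2.1
print(lookup(2003))   # 1.0: rounded down to the 2000 entry
```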

The kind of data that can benefit from storage and lookup in an array is values that cannot be computed because they result from external factors, and values from long and complex calculations that are used frequently. Distances between towns on a map or locations on a PCB layout, for example, can be stored in a two-dimensional array so that the most economical route covering a number of points can be found. Integers can be mapped to town names. During computations the fact that town 3 is 'Ayr' is unimportant. To provide human-readable output another lookup table, index to name, can be generated.

Because the distance A to B is the same as B to A, a square array gives more than 50% redundancy. For an additional overhead when looking up a distance a triangular array can be used. As for any multidimensional array a function that will map indices in the multidimensional space to an index in the underlying vector of values is needed.

In two dimensions the requirement is for the number of elements in preceding rows plus elements in the current row. Zero based indexing simplifies the calculation getting row_index * row_size + column_index for a rectangular array. In a triangular array the size of each row increases:

def _compute_index(self, indices):  # indices is a tuple
    row = indices[0]
    column = indices[1]
    if column > row:
        row, column = column, row   # swap row with column
    row_total = (row - 1) * row // 2
    return row_total + column

Referring to the distance map in the figure above, PORTSMOUTH at index [0] has a row size of zero: a virtual row. Row sizes form an arithmetic series with sum number_of_terms * (first_term + last_term) / 2. First term is zero but we want the sum of preceding rows and, because row [1] ABERDEEN maps to row zero in the underlying array, subtract 1 from the row index to get number_of_terms. Because the multiplication is done first, integer division // can be used to get an integer result.
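Checking the mapping against a few rows confirms the formula; this is a stand-alone version of _compute_index with the same arithmetic:

```python
def compute_index(row, column):
    if column > row:
        row, column = column, row        # distance A to B equals B to A
    return (row - 1) * row // 2 + column

# row 1 holds 1 element, row 2 holds 2, row 3 holds 3, ...
print(compute_index(1, 0))   # 0: first stored distance
print(compute_index(2, 1))   # 2: last element of row 2
print(compute_index(3, 0))   # 3: first element of row 3
```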

When using arrays that are not rectilinear, row sizes must conform to a sequence, not necessarily linear, but one that has a closed form (a formula giving the nth term) so that it can be summed.

Summary

Arrays provide a simple, efficient and easily applied data type accessed by indexing. The restriction they have relates to their fixed size, but it is simple to write types based on an array that provide for addition and insertion of further elements. How this is done will be revealed in Algorithms 12. Algorithms 9 looks at records.

[Algorithms 8: Arrays and records (c) Martin Humby 2015]
Share post

## Algorithms 7: Logical reality

Visible to anyone in the world
Edited by Martin Thomas Humby, Thursday, 30 July 2015, 00:35

Here we look deeper into using the Boolean operators provided by Python and consider how the missing logical operation implication works with the other operators. An example of joining conditions with or might be

if files_still_loading or preferences_not_initialized or no_user_input:
    sleep(5)

An example of joining with and might be

if no_workers_running and files_saved and not_cancelled:
    exit_program()

These examples are easy to comprehend because all the conditions are simple Boolean values but complex expressions joined with the logical operators may present more of a problem.

The 'happy' or 'sad' scenario can provide some additional insight into the meaning of these operators and the equivalence of implication to if ... then.

Suppose a person's happiness results from a number of conditions happy_1, happy_2, happy_3, etc. The condition happy_1 might be (holiday and sunny), happy_2: (workday and got_a_lot_done), happy_3: (15 < temperature_today < 30) and so on.

If the conditions are joined with or

happy_1 or happy_2 or happy_3 or ...

when any one condition is satisfied this person will be happy. A nice thought but probably not a match for many requirements.

If the conditions are joined using and

happy_1 and happy_2 and happy_3 and ...

for this person to be happy all the conditions must be satisfied and they are likely to be sad most of the time. Additionally some conditions are mutually exclusive, a holiday cannot also be a workday, so this person will never be happy. To avoid this problem such conditions can be grouped using or but do not forget the brackets

(happy_1 or happy_2) and happy_3

Note that conditions containing not produce mutual exclusion when not is applied to the same variable or expression. Therefore we need to write

((not workday and sunny) or (workday and got_a_lot_done)) and ...

Missed mutual exclusion may account for the occasional condition that is never True or always True. De Morgan's laws can sometimes simplify expressions or reveal hidden problems when not is used:

not(P and Q) is equivalent to (not P) or (not Q)

not(P or Q) is equivalent to (not P) and (not Q)

Equivalence means 'and vice versa' in both cases. These laws are fairly easy to remember: and -> or, or -> and when the bracket on the left is 'multiplied' out.
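Both laws can be checked exhaustively in a couple of self-checking lines:

```python
# Verify De Morgan's laws for every combination of truth values
for P in (True, False):
    for Q in (True, False):
        assert (not (P and Q)) == ((not P) or (not Q))
        assert (not (P or Q)) == ((not P) and (not Q))
```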

Implication and short-circuit evaluation

Using or to deal with mutually exclusive conditions only works when exclusion can be guaranteed. Implication provides a catch-all alternative. Suppose that in a language (not Python) there is an implication operator imp and we can write

(holiday imp sunny) and (workday imp got_a_lot_done) and ...

Referring back to the truth table for implication we see that this operation returns True in all cases when its initial argument is False. If it is not a 'holiday' then (holiday imp sunny) evaluates to True whether or not it is sunny.  The falsehood that would have been got from (holiday and sunny) is no longer present and evaluation of workday can be taken into account.

Similarly (workday imp got_a_lot_done) evaluates to True when it is not a workday. This implication is therefore logically equivalent to the if statement in a function

def happy_2(workday, got_a_lot_done):
    if workday:
        return got_a_lot_done()
    else:
        return True

Additionally, got_a_lot_done(), shown as a function passed as an argument to happy_2(), will only be executed on a workday. Even better, possible double evaluation of workday in the inline version of implication

(not workday or (workday and got_a_lot_done()))

is eliminated by else and the unassailable logic that if it is not a workday then workday is False.

Python provides short-circuit evaluation of expressions containing the Boolean or and and operators. When evaluating an expression P or Q, Q will only be evaluated if P is False. If P is True then P or Q must be true whatever the value of Q and Q need not be considered. Evaluating P and Q, Q will only be evaluated if P is True.

Apart from improved efficiency, short-circuit evaluation is useful because P can guard Q against evaluation. Searching for a particular value in a sequence of values for example, execution of the search loop can only proceed if indexing into the sequence has not gone beyond its end:

while index < length_sequence and sequence[index] != required_value:
    index += 1    # increment index by 1

Here index < length_sequence guards sequence[index] != required_value against evaluation when index is out of range.
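Wrapped up as a complete function (names as in the fragment above, the function name is my own), the guarded search might read:

```python
def find_index(sequence, required_value):
    """Return the index of required_value in sequence, or -1 if absent."""
    index = 0
    length_sequence = len(sequence)
    # index < length_sequence guards the indexing expression on its right
    while index < length_sequence and sequence[index] != required_value:
        index += 1
    return index if index < length_sequence else -1
```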

Sadly, an implication operator P imp Q cannot be implemented as a general purpose function and maintain short-circuit evaluation of Q when Q is an expression. The Q expression must be evaluated before imp() can be called.

Python allows Q to be turned into a function and passed to imp() without evaluating it, as in happy_2() above, but then we get the problem of passing any arguments that function Q() requires. It seems custom rather than general-purpose implication functions may often be required, but the logic of implication can be extremely useful so any substitute for an operator is worth considering.

Consider the common occurrence where various actions must be selected depending on the current value of a variable - actions in relation to a student's score in a test, for example. A low score results in one set of actions, but the boundary between a mid score and a high score is not only flexible but could depend on an individual student's past performance. If Johny has shown a marked improvement he should get a commendation, perhaps. The ranges for mid score and high score overlap, and special assessment is only done for marginal cases as part of the high-score actions.

These simple requirements can be implemented with a nested if statement:

def low_score():
    print('do low score actions')
    return False

def mid_score():
    print('do mid score actions')
    return True

def high_score():
    print('do high score actions')
    return True

def test_score(score):
    evaluation = False
    if score < 40:
        evaluation = low_score()
    else:
        if 40 <= score <= 85:
            evaluation = mid_score()
        if 75 <= score <= 100:
            evaluation = high_score()
    return evaluation

For more complex requirements a nested if can become very complicated and error prone and may end up overflowing into the right margin. Implications linked by and can reduce these statements to a single line of code although, for clarity, it will generally be better to put each implication on a separate line:

def imp(A, B):
    if A: return B()
    return True

def test_score(score):
    print('score', score)
    evaluation = imp(score < 40, low_score) and \
                 imp(40 <= score <= 85, mid_score) and \
                 imp(75 <= score <= 100, high_score)
    return evaluation

print(test_score(39), '\n')
print(test_score(50), '\n')
print(test_score(80), '\n')

In test_score, evaluation of the Boolean expression is secured by assigning its value to evaluation. A score less than 40 only requires the actions for a low_score. The low_score() function returns False and score evaluation ends. The effect is similar to including a break in a C or Java case statement.

The ranges for mid_score and high_score overlap so the mid_score() function returns True, score evaluation continues and the possibility of a high_score is considered.

A lot of functions get called here but this is not a pure functional approach because the actions of low_score() etc. rely on side effects from the functions. Possibly it could be made pure-functional by making these functions return a new evaluation with the required Boolean equivalent in each case but that is definitely something for later consideration.

To summarize: conditions joined by or test for success, any success. When a condition evaluates to True further conditions are not relevant. Conditions joined by and test for failure. When a condition evaluates to False the complete expression is False.

Implications joined with and introduce a test for applicability, returning True when the test fails so that evaluation of further conditions can continue. Under short-circuit evaluation a P imp Q operation is both logically and functionally equivalent to if P then Q. It can be replaced with a function containing such a construct, but conditional evaluation will only be maintained if the logical expression Q is represented as, and passed as, a function.

When conditions are joined with or a similar test for applicability requires an operator that returns False in every case where P is False. This is of course the and operator only returning True when both P and Q are true. Operator precedence means that brackets are not required but to show the logic clearly they may be advisable:

... or (P and Q) or (R and S) ...

Summary

In Algorithms 5 and 6 we have looked at some of the features of Boolean logic including operator precedence, short-circuit and eager evaluation and the results of applying conjunction, disjunction and implication to Boolean values. I hope I have convinced readers that implication can be of some use apart from supplying yet another operator to be evaluated in truth tables. In Python the logical equivalent provided by all data types supplies a useful paradigm that can simplify code in many situations but overuse should perhaps be resisted.

Bitwise logic - generally the application of the bitwise operators & (and), | (or), ~ (not) to all the bits of integers used as arrays of individual Boolean values - has been left aside for later investigation.

[Algorithms 7: Logical reality   (c) Martin Humby 2015]


## Algorithms 6: Logically speaking

Edited by Martin Thomas Humby, Sunday, 26 July 2015, 20:06

To write software that behaves how we want it to a line of reasoning we have devised is hard-coded into the system. In most cases Boolean logic suffices: a condition is true or false, there is no probably about it. Software that uses this kind of reasoning, and sometimes probabilistic reasoning as well, can be described as being rule based with logical expressions providing the rules.

Python supplies two statements that evaluate the truth of a condition and take appropriate action: if <condition is true> and while <condition is true>. The other loop construct, for <variable> in something, is predominantly unconditional: the only time such a loop will not be entered is when something is an empty set or sequence or an empty range of values. This is not the case in C etc., where a for loop is a while loop with optional extras.

Comparisons

The comparison operators provide basic generation of Boolean values. To investigate their usage the Python Console supplies a convenient environment. It works like a calculator with access to the full range of Python syntax and provides immediate evaluation of results: no print statements required. It can be run from a command window with C:\Python34\python.exe but it may be easier to start PyCharm and select the Python Console tab on the bottom bar. Simply type in comparisons at the >>> prompt and press return. My session went as follows:

Python 3.4.2 (v3.4.2:ab2c023a9432, Oct  6 2014, 22:15:05) [MSC v.1600 32 bit (Intel)] on win32
>>> 10 == 10    # equality 'equals'
True
>>> 10 != 10    # inequality 'not equal'
False
>>> 10 < 10     # strictly less than
False
>>> 10 <= 10    # less than or equal
True
>>> 100 > 42    # strictly greater than
True
>>> variable = 42 >= 42     # greater than or equal
>>> variable                # display the value at the variable
True
>>> a = 30
>>> 0 < a < 42  # operators can be chained
True

Note how a Boolean value can be assigned to a variable and stored for future use.

Functions can provide an additional source of Boolean values:

def exclusive_in(x, numbers, y):
    return x in numbers and x is not y

numbers = {1, 16, 42, 101, 10249561}    # a set
x = 10249561
y = 10249561
print(exclusive_in(x, numbers, y))

output: False as it should be

y = int(10249561.0)
print(exclusive_in(x, numbers, y))

output: True as it shouldn't be

Without going into any great detail, the function simply accepts the values passed to it when it is called by writing exclusive_in(x, numbers, y) and returns the result it calculates. The in and not in operators are useful for checking set membership as above, and for checking whether an object is present in a list and several other data structures.

The is and is not operators check whether the object at a variable is the same object as that at another variable. Their usage is quite specific and, as apparent from the output from the test code calling exclusive_in(), they should not be used as a substitute for == or !=. A correct return from the function would be:

    return x != y and x in numbers

A legitimate usage for is or is not checks the type of an object

    isint = type(obj) is int # int being the class of integer

The names of some commonly and not so commonly used built-in types are: object, int (integer), bool (Boolean), float, complex, tuple, list, set, str (string) and dict (dictionary: a hash map).

In Python all types have a logical equivalent acceptable to if and while. Any object can be cast to bool(obj) but doing this is usually unnecessary. Integers and floats count as False for a zero value, True otherwise. An empty set, sequence or dict presents False, True if it contains any elements.
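These equivalents are easy to confirm with a few self-checking lines:

```python
# Zero numbers and empty containers are logically False; everything else is True
assert bool(0) is False and bool(0.0) is False
assert bool(42) is True and bool(-1) is True
assert bool('') is False and bool([]) is False and bool({}) is False
assert bool([0]) is True    # non-empty, even though its only element is 0

if [1, 2]:                  # a non-empty list passes the if test directly
    branch = 'taken'
```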

Having obtained the conditions applying to a particular situation as above it may be necessary to introduce further reasoning by combining these values. Results can be combined using the logical operators, also called the Boolean operators.

The logical operators

The Python Boolean operators or, and, not accept the values True and False and all other types. The not operator reverses the truth of its operand, always returning a Boolean. It accepts a single operand placed on its right, making it a unary operator. The other two operators are binary and return a type that depends on the type and order of their two operands, but two Booleans always result in a Boolean.

Because all types have a logical equivalent the actual type of the return is generally not significant. The only reason this eminently useful Python attribute needs to be considered at this stage is because there is no Boolean exclusive-or operator. There is however a bitwise exclusive-or used to combine the bit patterns of integers. So, can this operator be used as a substitute for a logical xor with impunity?

It seems that it can, with the proviso that when relying on the logical equivalence of types other than bool, or integers except 1 and 0, those types must be cast to bool(<object>) before they can be used. Also, the fact that ^ has a higher precedence than the comparison operators can lead to problems. For example, if

p = 0; q = 1; r = 1; s = 0

then

print(p < q ^ r < s)

does not give the same result as

print((p < q) ^ (r < s))
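A helper along these lines (the name xor is my own, not a Python built-in) applies the casting advice above and sidesteps the precedence trap:

```python
def xor(p, q):
    # Casting each operand with bool() first makes bitwise ^ act
    # as a logical exclusive-or for operands of any type
    return bool(p) ^ bool(q)
```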

Tabulating the operators in ascending order of precedence:

| operator | name | return type | notes |
|---|---|---|---|
| or | disjunction | ? | (p ^ q) or (p and q) |
| and | conjunction | ? | p and q |
| (no operator) | implication p -> q | ? | not p or (p and q) |
| not | negation | Boolean | not p |
| in, not in, is, is not, <, <=, >, >=, !=, == | membership, identity, comparisons | Boolean | |
| ^ | exclusive-or | ? | (p or q) and not(p and q) |

A more extensive precedence table can be found in the Python documentation at https://docs.python.org/3/reference/expressions.html#operator-precedence.

Logical operator precedence works in a similar way to arithmetic operator precedence BODMAS but BEBNAO (brackets, exclusive-or, basis, not, and, or) is probably less helpful. The comparison operators and everything else, apart from if else, while (and lambda expressions), have a higher precedence than the Boolean operators. This means that we can write

    v = a < b and b < c and d > c

without any brackets around the comparisons. If all the comparisons return True then True is assigned to the variable v. If any comparison returns False then False is assigned.

However, it is worth keeping in mind that

    v = p or q and r or s

gets

    v = p or (q and r) or s

not

    v = (p or q) and (r or s)

The or operator means 'One, the other, or both.' This usage is universal throughout mathematics and computing but everyday usage tends more towards exclusive-or. 'We can go to the beach or we can go to the forest and bathe in the hot springs.' Clearly there will not be time to do both!

Truth tables

A way of illustrating the results got from Boolean operations is to prepare a truth table tabulating outputs for all possible inputs with say T - true,  F - false

| p | q | not p | p and q | p or q | p xor q | p => q | p -> q |
+---+---+-------+---------+--------+---------+--------+--------+
| p | q | not p | p and q | p or q | p xor q | p => q | p -> q |
| T | T |   F   |    T    |   T    |    F    |   T    |   T    |
| T | F |   F   |    F    |   T    |    T    |   F    |   F    |
| F | T |   T   |    F    |   T    |    T    |   T    |   T    |
| F | F |   T   |    F    |   F    |    F    |   T    |   T    |

This table is of course computer generated to avoid any possibility of error and, for bigger tables, a level of boredom. The column titles use xor to represent exclusive-or and there are two implication columns, => and ->. The xor column values are got using bitwise ^. Generating the => column uses not p or (p and q). The -> column uses a bitwise equivalent.

The truth table generator is included in the attached .zip. It can be used to generate a truth table for any number of logical expressions with any reasonable number of variables by following the instructions at the top of truth_tables.py. It works by generating a binary number 1111 when there are say 4 variables p, q, r, s and counting down to zero. A binary 1 represents T - True, 0 represents F - False. The script truth_tables.py uses several other techniques that have not been discussed yet but may be worth a look.
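The attached script itself is not reproduced here, but the counting technique it describes can be sketched in a few lines (a minimal version of my own, not the author's code):

```python
def truth_rows(n):
    # Count down from binary all-ones to zero; bit i of the counter
    # supplies the truth value of variable i (1 -> True, 0 -> False)
    for value in range((1 << n) - 1, -1, -1):
        yield tuple(bool((value >> (n - 1 - i)) & 1) for i in range(n))
```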

The reason the values of p and q show T, T in the top row of the table and F, F at the bottom is that this appears to be the computer-science convention. Engineers, being a more practical breed, start with 0, 0 at the top and end with 1, 1 when considering logic gates. Now there's a thought! If I reversed the logic of the function that converts a 1 to a 'T' and a 0 to an 'F', would the other-way-up table still be correct?

This file also includes a table for the Boolean expressions used to demonstrate precedence above but as-is it outputs a table with only the values of p, q, r, and s shown. I invite you to copy and paste the blank table into Notepad or similar, perhaps print it out, and fill in the truth values. Then delete blank True from the tt.print_table() call and check your answers against what the computer thinks. You can of course do the same with any other truth table, one from the listings or one you have initialized yourself.

To get the code into PyCharm extract the .zip to your equivalent of C:\Users\Martin\PycharmProjects\ALGORITHMS\. Then with PyCharm open do File > Open and navigate to the Logic folder. Highlight the folder (not any of the files it contains) and click [OK]. Select Open in a new window [OK]. Expand the project tree and double-click on truth_tables.py to open it. Run the file.

Summary

Here we have looked at how Boolean values are generated by code, combined using conjunction and disjunction and how logical expressions can be investigated using truth tables. Algorithms 6 will look deeper at the meaning of these logical operations and consider implication and its equivalence to if condition then do something...

[Algorithms 6: Logically speaking   (c) Martin Humby 2015]


## Algorithms 5: Functions as entities

Edited by Martin Thomas Humby, Thursday, 30 July 2015, 20:28

Variables as functions

In common with many current languages, Java being the exception, Python functions can be assigned to a variable. If we get fed up with typing result = new_total(old_total, operator, number) for example, we could write

ntot = new_total
new_tot = ntot(old_tot, opr, num)

Note that when a function is assigned only the function name is used, no brackets, but when the function is called using the variable, the variable name is used as a function name. The brackets and the argument list if there is one distinguish a function call from the function itself.

Python functions are first-class data types, which means they can be assigned to variables and passed to other functions. Passing one of a number of different functions can tailor functionality to suit differing requirements when called by the recipient. When functions are stored as elements of a data structure, they can be called to handle events in a way relating to the requirements of a particular instance of the structure.

On a form in a graphical user interface for example, functions supplied by the form and assigned to controls like buttons can dictate what happens when a particular button is clicked. The possibilities got from passing and storing diverse elements of functionality defined by functions are endless.

Functions that return functions

When a function is passed as an argument the receiving function can of course return it. Alternatively a function can return another function that has been defined in the same file or imported into it, but commonly a function will define a function nested within its body and return that. A function so defined has read-only access to the local variables of the enclosing function, including its parameters. It cannot assign to these variables but can modify any mutable types the locals reference.
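A brief sketch of both rules (the names are illustrative; Python 3 without a nonlocal declaration is assumed):

```python
def enclosing():
    count = 0
    items = []

    def nested():
        # count += 1 here would fail: a nested function cannot rebind
        # the enclosing function's locals (without a nonlocal declaration)
        items.append('x')   # but it can mutate objects they reference
        return count        # and it can read them

    nested()
    return count, items
```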

A nested function can be called within the enclosing function, returned by it and/or passed as an argument to another function. Whether or not a recipient calls the function it receives is of course its option, but if it does the passed function can extend the recipient's functionality in some way, perhaps based on the values of the caller's locals.

Below, passer() defines a nested function lister() and passes it to user()

def user(lister):
    new_list = lister()
    new_list[0] = 33
    print(new_list)

def passer(a_list):
    def lister():
        return a_list[:]    # returns a copy
    user(lister)
    return lister

a_list = [1, 2, 3, 4]
a_lister = passer(a_list)
print(a_list)               # check original list unaffected
a_list[3] = 66
user(a_lister)

output:
[33, 2, 3, 4]
[1, 2, 3, 4]
[33, 2, 3, 66]

The function passer() demonstrates the two main options: it defines lister(). A lister() instance is passed to user() as an argument to the call and is also returned by passer(). To avoid any possibility of modification of the original list, lister() returns a copy. This copy is not made until lister() is called by user(). If it were never called no copy would be made.

Doing this implements a strategy called lazy evaluation: functions rather than the results of functions are passed and not called until their result is required. If on occasion the logic in the recipient does not require the function's return value, it is not called. The enclosing function's locals supply a means of passing values to such functions. Here passer()'s local a_list remains in existence after passer() exits and will stay so while any reference to this particular lister() instance remains.

After checking that a_list has not been modified, main code makes a modification to the list and calls user(). As can be seen from the output, main code's modification is reflected in the reference to the list retained by lister().

The name of the nested function is never made public and, because it is quite simple, it could be replaced with an anonymous lambda expression:

def passer(a_list):
    lister = lambda : a_list[:]    # returns a copy
    user(lister)
    return lister

The syntax of Python lambdas is quite different from standard functions. The keyword lambda is followed by a comma-delimited list of zero or more parameters separated by a colon from an expression that supplies the lambda's return value:

def add_delta(delta):
    expr = lambda x, y: (x + y)/2 + delta
    return expr
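Calling add_delta() hands back a function that remembers its delta; for example:

```python
def add_delta(delta):
    # the lambda keeps a reference to delta after add_delta() returns
    expr = lambda x, y: (x + y) / 2 + delta
    return expr

mean_plus_5 = add_delta(5)   # illustrative name
```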

Requiring only a single line of code to define and return the required function, lambda expressions can be placed inline:

def implies(P, Q):
    if P: return Q()
    else: return True

# in client code that uses implies():
... and implies(workday, lambda : percent_done(today) >= 95) and ...

In implies(P, Q) execution of Q() is conditional on P being true. If P holds, the lambda is called and only then will percent_done() be called with the value of today that accompanies the lambda. Perhaps percent_done() will call another function, request user input or access a database. A great deal of processing power can be saved using this strategy.

Nested functions can be seen as existing in an environment - the enclosing-function locals used by the nested function. Together this environment and the nested function form a structure known as a closure.  As for any other structure, a closure can be returned by the enclosing function, assigned to variables or passed to other functions.

[Algorithms 5: Functions as entities    (c) Martin Humby 2015]


## Algorithms 4: Indeterminate arguments, complex returns

Edited by Martin Thomas Humby, Sunday, 26 July 2015, 22:05

Passing an indeterminate number of parameters

Python matches arguments in a call to parameters in the parameter list by position

def func_a(a, b, c):
    ...

val = func_a(d, e, f)

Argument d is matched to a, argument e to b, etc.

Sometimes it is useful to be able to write a function that accepts any number of arguments - print() is one such function. When this is so, prefixing the last parameter in the positional list (perhaps the only parameter) with a * character indicates it is to accept a sequence of one or more arguments:

def func_b(a, b, *args):
    ...

val = func_b(d, e, f, g)

Here d and e are matched to a and b positionally but f and g are wrapped to form a tuple which is passed to args. Tuples can be created on the fly and unpack transparently:

tup = (a, b, c, d)      # this creates a tuple
tup = 1, 2, 3, 4        # brackets are optional
d, e, f, g = tup        # the tuple is unpacked

Transparent unpacking cannot be used in the function body when the number of arguments in the tuple is unknown: a mismatch generates an error. Arguments in the tuple are accessed by indexing into it, arg = args[index], or by iterating over it in a for loop

def func_b(a, b, *args):
    for arg in args:
        # do something with arg

Another way of providing a function with optional parameters is to use keyword arguments. This kind of argument is doubly useful because inclusion of a particular argument in any call is optional and the parameter supplies a default value when it is not included:

def func_b(a, b, *args, h=None, i='', **kwargs):
    if h == None: print('h holds default', h)
    else: print(h)
    print(i, kwargs, sep = ', ')

val = func_b(a, b, c, d, i='custard', j=False, k=True)

output:
h holds default None
custard, {'j': False, 'k': True}

The argument h is not included in the call so it is initialized with its default value None. Additionally, including the optional parameter **kwargs, prefixed with two stars, allows zero or more additional keyword arguments to be passed to the function. These additional arguments are passed as a dict (dictionary) and access to them is obtained by writing

j = kwargs['j']    # no stars here

for example. If j were not included in the call an error would result, so for optional additional keyword arguments we need to write

if 'j' in kwargs:
    j = kwargs['j']
else:
    j = 'j default'
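The same check can be written in one line with the dict method get(), which takes a default as its second argument:

```python
def func_c(**kwargs):
    # get() returns the stored value when the key is present,
    # otherwise the supplied default
    return kwargs.get('j', 'j default')
```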

Tuple function returns

With one notable exception, interpretations of the requirement that a function must only return a single value have been a problem in computing from day one. LISP of course provides the exception. Languages that rely on var parameters or the like to sidestep the single-return restriction create an environment where mistakes are all too likely. Such languages feature pass by value conjoined with pass by variable address conjoined with pass by reference.

This is fine in a low level language like C where all is explicit, less so in languages that have pretensions towards a level of abstraction. In any event allowing modification of non-locals at diverse points in a function is not conducive to data integrity when there is some obscure error in the logic or when an exception is thrown.

Python's answer is to supply a single return as a tuple of multiple values that can be unpacked transparently to a list of variables:

def swap(a, b):
    return b, a    # construct a tuple and return it

a, b = 1, 2
print(a, b)
a, b = swap(a, b)
print(a, b)

output:
1 2
2 1

If the number of variables in the list does not match the returned tuple an error occurs and thus return of a single item of known composition is maintained.

Transparent construction and unpacking of tuples allows multiple assignments, any to any. So, in reality, a swap function is not required to exchange the values at two variables. The construct

a, b = b, a

is all that is needed.

After all the millions of functions that have been written to modify data using side effects it seems unlikely that tuple returns will see much general usage in the near future but in this series I will use this return type whenever it is indicated.

Summary

In Algorithms 3 and 4 we have looked at the usage and structure of functions and the scope of entities within a program including local and other variables. Lazy evaluation and closures will be considered in Algorithms 5.

[Algorithms 4: Indeterminate arguments, complex returns    (c) Martin Humby 2015]


## Algorithms 3: Functional integrity

Edited by Martin Thomas Humby, Monday, 27 July 2015, 13:03

Algorithms 2 implemented a very simple algorithm as a program. Despite its simplicity it still relied on two built-in functions to get user input and display output. In all but the most trivial of programs, modularity in the form of functions, objects, classes, components, modules and packages is used to reduce the unassailable complexity that would otherwise arise.

Regardless of complexity, an algorithm will often have applicability extending beyond a particular program and it is generally useful to separate this functionality from program specific code by factoring it off to a separate function. This article takes a look at functions and some other basics, both in general and as implemented by Python.

You may be already acquainted with functions to a greater or lesser extent. A function is a relation that maps an element from a set of inputs to exactly one element in a set of outputs. A relation is defined by input to output mappings: the same input always produces the same output. Two or more inputs can produce the same output or every input can map to a different output. The relation's inner workings, when they extend beyond simple mapping, need not be known. They do not form part of its specification and quite possibly there may be several different algorithms that implement the same relation.

Exactly one output is crucial for a function. Functions often use other functions, and we cannot make use of a function that returns one output for a particular set of inputs, two for another and fails to deliver at all for a third. The meaning of this single-output restriction will be discussed in greater detail in the next article.



The parts of a function

An application program, application or app. (called this to distinguish application software from system software - the operating system for example) can generally be identified by its performing some task or tasks for users or another application program. It is a top level unit of modularity providing an interface to its users that exposes functionality and hides inner workings, its implementation. Functions exist at the other end of the scale but still present an interface hiding details of the function's implementation.

Consider a function that accepts two numbers, total and number with a string representing an operator. It applies number to total using the operator corresponding to the string and returns the result:
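The figure showing this function is not reproduced here; a body consistent with the description might look like the following (a plausible reconstruction, not necessarily the original):

```python
def new_total(total, operator, number):
    # apply number to total using the operator named by the string
    if operator == '+':
        result = total + number
    elif operator == '-':
        result = total - number
    elif operator == '*':
        result = total * number
    elif operator == '/':
        result = total / number
    else:
        raise ValueError('unknown operator: ' + operator)
    return result
```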

The function name, new_total, and its parameter list are visible to client code that uses the function. The function's parameters may also be referred to as its formal parameters or arguments. The function body, in the function's implementation, is hidden from client code.

Inside the function body its parameters become local variables. Such variables only exist while code in the function executes and are then gone forever. We say the scope of these variables is the function. Assigning a new value to a local variable has no effect on anything outside the function even when an external variable has the same name as the local.
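For instance (illustrative names), assigning to a parameter inside a function leaves an outer variable of the same name untouched:

```python
total = 100

def reset(total):
    total = 0     # rebinds the local parameter only
    return total
```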

The local variable result declared within the function is even more local than its parameters: there is no way the value of result can be changed externally. However, when the function is called by client code the values of the arguments to the call are passed to the function initializing its parameters to the same values as the arguments. Below old_total, operator and number are passed to new_total() and the return assigned to updated:

old_total = 101
operator = '+'
number = 7
updated = new_total(old_total, operator, number)

The arguments (old_total, operator, number) are sometimes referred to as the actual parameters of the call. The result of the function is returned to the caller where it can be assigned to a variable, used in an expression or simply discarded.

Placing the function in an expression makes assignment to an intermediate variable unnecessary

cumulative = old_total + new_total(old_total, operator, number)

But if we write

new_total(old_total, operator, number)

The function is called but its return value is not used. Doing this is pointless in this case: new_total() has no effect other than returning a value. Other functions may have useful side effects. The built in function print() for example, sends output to the screen.



The scope of variables and functions

Python requires declare before use. The scope of a variable or function is therefore from the point where it is declared or defined up to the end of the file it is declared in. When another file is imported variables and functions in that file will only be visible after the import statement. There are two import options. An entire file or files can be imported:

import math, array

but when anything from the import is referred to it must be prefixed with the file name

print(math.pi)

Alternatively, specific items from a file can be imported and optionally given an alias:

from math import sqrt, pi
from math import sqrt as root, pi as pye
print(sqrt(2))
print(root(3))
print(pi)
print(pye)

With this form the file-name prefix is not part of an item's name in the current file; all three options can be used together.

When a file is imported any executable code it contains is run to provide for initialization of variables in the import. However, execution can be made conditional by putting code, and any import directives, within an if statement:

if __name__ == '__main__':
    inc_count()
    import math
    print(math.pi)


With the __name__ condition shown, code placed within the if for testing or debugging will only be run when the file itself is run, not when it is imported.

Within the body of a function all items whose scope it falls within are visible. Any external function can be called and there is read only access to external variables. Optionally a function can be given additional write access to a variable by declaring it global within the body:

count = 0

def inc_count():
    global count
    count += 1  # read this as increment count by 1

Possibly useful for file initialization, maintaining a counter for debugging or performance evaluation but generally to be avoided, I think.



Function return values

A return statement in a function stops its further execution and the value from any variable or expression that follows the return is passed back to the caller. There is of course a variation to this no further execution rule. When the return is inside a try - finally block, code in the finally part will be executed come what may. This option can be left aside for the present.

There can be as many return statements in a function as required each return representing a separate exit point. Using this style new_total() can be implemented as

def new_total(total, operator, number):
    if operator == '+':
        return total + number
    if operator == '-':
        return total - number
    # etc.
    return None

When operator is not a '+' execution drops through to the next test of operator, and so on. An elif: (else-if) can be replaced with a simple if: there is no need to step over the tests that follow when a return takes care of a success.

The return None statement is in fact redundant. When a function does not contain a return statement or return has nothing to return, None is returned by default. You can see None being returned by a built-in function that only produces a side effect and has nothing to return by executing print(print()).
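This default can be checked directly; a tiny sketch (the function name is invented):

```python
def side_effect_only():
    print('doing something')    # no return statement anywhere

value = side_effect_only()
# value is None, just as print(print()) shows None for print() itself
```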

However, when None is returned deliberately, perhaps to show that none of the conditions in a function was True, the return should be explicit for clarity as to what is intended. By returning None we can write

result = new_total(old_total, operator, number)
if result != None:
    ... # some actions
else:
    ... # some other actions

Because None is logically equivalent to False in Python, we could write if result: ... but doing that is probably going to be wrong. A zero value, integer or floating point, is also logically equivalent to False, so the else actions will be selected in all these cases. If that is the requirement then a comment in the code explaining it is essential, and that means just as much typing as putting both conditions into if not (result == None or result == 0):.
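A short illustration of why if result: can go wrong when a zero result is legitimate (the names are invented for the example):

```python
result = 0                  # a legitimate zero result, not a failure
if result:
    branch = 'if'
else:
    branch = 'else'         # taken: 0 is logically False, just like None
explicit = result != None   # True: 0 is a value, None is the absence of one
```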

Using single or multiple exit points

The opinion that a function should only have a single exit point is sometimes seen. This idea probably dates back a very long way, to supporters of old-style Pascal. At that time Pascal was a language from academia that attempted to promote structured programming by enforcing a single exit at the end of a function, as opposed to C, a language designed to cater for a specific range of needs in real-world programming.

As far as I can see a single exit has nothing to do with structured programming. Execution in a function with multiple exit points follows the form of a tree. In an equivalent with a single exit point execution tends to the form of a graph with nodes linked back to the trunk, an altogether more complex structure. Introducing variables required only to hold a return value until the end of the function adds more clutter and room for error.

The only advantage of a single exit was simplification of resource deallocation by putting it at a single point, a non-requirement when try - finally and garbage collection are available. Go for simplicity and clarity over opinion with no logical basis and use single or multiple exit points as indicated to get these requirements, maybe?

Mutable and immutable

All the types we have looked at so far, integers, real numbers and strings, are strict values. Such values are immutable: they cannot be modified. Changing the value of the number 2 to make it 3 so all twos in a program become threes is an obvious absurdity. It should not be and is not allowed. We can change the value at a variable by assigning a new value but the value itself is inviolate.

This is not the case with all types. Some types are mutable and allow their content to be changed. When an instance of one of these types is assigned to a variable a reference to the instance is assigned. Assignment from that variable to another variable copies the reference and both variables now refer to the same instance.

Not surprisingly modifications to an instance from one variable are seen from all the variables that hold a reference to it. Similarly, when a reference to a mutable type is passed as an argument to a function, modifications to it as a local variable are reflected to all other references. Some locals are more local than others.
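A minimal demonstration of reference copying with a list (the names are invented for the example):

```python
a = [1, 2, 3]
b = a              # copies the reference, not the list

def append_four(seq):
    seq.append(4)  # mutates the one shared list instance

append_four(b)
# the change is visible through every name that refers to the list
```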

The built-in type list is mutable. It supplies a sequence of any type and elements can be modified by indexing into the list and assigning new values. In common with all similar Python types indexing is zero based, the first element is at index [0]. For example:

def increment_elements(numbers_list):
    index = 0
    for number in numbers_list:
        numbers_list[index] = number + 1
        index += 1

a_list = [1, 2, 3, 4]
increment_elements(a_list)
print(a_list)

output: [2, 3, 4, 5]

As can be seen, modifications in the function increment_elements are seen from the variable a_list. Each iteration of the for loop assigns successive elements of the list to number. The loop can be made slightly neater by using the built-in function enumerate that returns both the element and its index:

def increment_elements(numbers_list):
    for index, number in enumerate(numbers_list):
        numbers_list[index] = number + 1

The enumerate() function complies with the single return rule by returning a tuple. The tuple is unpacked transparently to the variables index and number.

A tuple is similar to a list - elements have a specific and usually a meaningful order, but a tuple is immutable once created.
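The difference can be seen by attempting an item assignment; Python raises a TypeError for a tuple (names invented for the example):

```python
pair = ('x', 0)
try:
    pair[0] = 'y'       # tuples reject item assignment
    changed = True
except TypeError:
    changed = False     # the assignment never took effect
```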

Summary

Here we have looked at the usage and structure of functions and the scope of entities within a program including local and other variables. Algorithms 4 looks at functions with an indeterminate number of parameters. Lazy evaluation and closures will be considered in Algorithms 5.

[Algorithms 3: Functional integrity   (c) Martin Humby 2015]


## Algorithms 2: An invariant Greeting

Edited by Martin Thomas Humby, Sunday, 26 July 2015, 21:43

The exit behaviour problem found with the 'Greeting' program in Algorithms 1 was down to incorrect ordering of statements inside the while loop. To solve this problem start again and do a proper job this time.

Step 1: Write a general description of what we want the algorithm to do and how it might do it; call this an 'initial insight' if you so wish:

Output a general greeting statement. Read in a user's name and if the input is not an empty string output a more specific greeting that includes the name. Keep on doing these last two actions until an empty string is input.

Step 2: An algorithm is a list of precise instructions for solving a problem. All instructions must be unambiguous. By definition anything written in an entirely formal language like a programming language is unambiguous but this does not mean such a listing will match what is required or make it easy to see what went wrong when it doesn't.

Some intermediate form written more in the language of the problem than that of the implementation is often useful but the variety of styles for writing such algorithmic statements are numerous. Currently a pseudocode, formal English, style seems to be becoming more general than numbered lines with gotos so following this trend:

    Output a general greeting
    loop
        Request a name and read it in
        if name is an empty string
            break                # break means exit the enclosing loop
        Output a greeting that includes name
    # the end of the loop
    Output a goodbye message

Step 3: implement the code. Start PyCharm; the Greeting project should open, or if you have closed it to start another, select it from File > Reopen Project. Select hello.py in the project tree and do Ctrl + C (copy). Select Greeting in the project tree and do Ctrl + V (paste), naming the new Python file 'hello2.py', say. Edit code in the new file to be as follows:

print("Hello, g'day, its great to see you")
while True:
    name = input('Please input your name >')
    if name == '':
        break
    print('Hello', name + '. Welcome to algorithms and data structures in Python.\n')
print('\nBye')

Run hello2.py by right-clicking on the new file in the edit window or on the project tree. Everything works as might be expected.

Python does not have a loop ... end or a repeat ... until construct. To locate a loop condition anywhere other than at the apparent start, use while True (loop forever) with break to get a conditional exit from the loop. In print('Hello', name + '. Welcome to algorithms and data structures in Python.\n') the string concatenation operator + is used to join name to the string literal and get the full-stop in the right place; \n includes a new line character in output.

A loop invariant

There is a property of loops called the loop invariant that guarantees correct working when chosen to relate to the task the loop performs (actually it only guarantees partial correctness because it does not ensure the loop will ever terminate, but that is something else). We already know that the loop in hello2.py works as required but a check that it complies with an invariant property may be instructive particularly in view of the perhaps unusual location of the loop condition.

The invariant must hold - be true, before and after each repetition of the loop (aka. an iteration). Basically this means that it must hold before the test allowing loop entry or re-entry and hold after this test whether or not the test returns true. The invariant for a loop summing a list of numbers, for example, states that the total of those already summed and those still to be summed is constant and equal to the eventual summation: the required result towards which the loop progresses.
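The summing example can be written with the invariant checked by an assert on every iteration (a sketch, with the list contents invented):

```python
numbers = [3, 1, 4, 1, 5]
goal = sum(numbers)     # the eventual summation the loop progresses towards
total = 0
index = 0
while index < len(numbers):
    # invariant: already summed + still to be summed == goal
    assert total + sum(numbers[index:]) == goal
    total += numbers[index]
    index += 1
# the invariant also holds after the final, failing, loop test
assert total + sum(numbers[index:]) == goal
```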

Python does not allow assignments or other operations in conditions so it is unlikely that a simple loop-entry test will result in an invariant failure after the test when the invariant holds before it. However, as we will see later, side effects from calling a method of an object could result in a fail so the before and after requirement is worth keeping in mind.

So, what is the loop invariant for hello2.py? A possible requirement is that the variable name should hold a string with length greater than or equal to zero. This is far too weak: all strings have a length greater than or equal to zero.

A good invariant is that name must hold a string read anew from user input on each iteration. We could say that the required result, embodied in the combination of the invariant and the exit condition and towards which the loop progresses, is that the user becomes more and more bored with inputting names and eventually presses return without entering one. The Algorithms 1 Greeting did not comply with this invariant, which helped me to write flawed code in hello.py.

In 'hello2.py' the invariant holds before and after if name == '':, and the location of the loop test is perfectly sound. The test marks the loop entry and re-entry point where the invariant must hold; while is just the point on the loop that execution jumps back to after each iteration. Any reasonable compiler will remove the while check that True is true and jump execution back to the input statement.

When the initial assignment to name takes place the loop has not been entered despite the fact the assignment appears after the while. To locate the test with while and maintain the loop invariant, name must be initialized with a user input before the loop:

name = input('Please input your name >')
while name != '':
    print('Hello', name + '. Welcome to algorithms and data structures in Python.\n')
    name = input('Please input your name > ')


There is nothing wrong with this version, in fact it looks somewhat neater than the original but that is down to Python syntax rather than any problem with the original code. Whether it is easier to read, having greater clarity, is a matter of opinion and possibly depends on what one is accustomed to. If the initial insight indicates a loop-exit located other than with the while why not put it there, maybe?

In this offering we have looked at three aids to developing correct algorithms: initial insight, pseudocode and loop invariants. I hinted that initial insight should include a specification of what the algorithm must do or achieve. This requirement ensures that we write the right algorithm, one that solves the problem we want it to. Pseudocode and loop invariants help us write the algorithm right. (Adages adapted from validation and verification of software systems as in Boehm 1979).

Reference

    Boehm, B. W. (1979) 'Guidelines for Verifying and Validating Software Requirements and Design Specifications', TRW Redondo Beach, CA, USA [Online]. Available at http://csse.usc.edu/csse/TECHRPTS/1979/usccse79-501/usccse79-501.pdf (Accessed 22 June 2015)

[Algorithms 2: An invariant Greeting(c) Martin Humby 2015]


## Algorithms 1: Algorithms and data structures

Edited by Martin Thomas Humby, Monday, 3 Aug 2015, 11:40

According to Niklaus Wirth, 'Algorithms plus Data Structures equals Programs' (Wirth 1976). Efficient and well constructed programs are an ongoing interest of mine so doing a series of blogs on this subject will provide an archive and an aide-mémoire for what I have found out so far.

The easy to learn programming language Python looks to be the favourite for implementing the algorithms and will be the main language used. Python has fewer gotchas and 'must do it this ways' than any other I have come across. The fewer the language features to get in the way, the more scope there will be for the algorithms, or that is what I am hoping. To keep things simple Windows will be the host operating system unless there is a good reason otherwise.

The only prior knowledge assumed will be a general idea of the basic flow of control structures found in all mainstream programming languages: if something do this, else do that, and repeating actions while a particular condition applies. Other assumed prerequisites will be some acquaintance with the mathematical entities integer, real number, set and sequence and an understanding that in computing terms a string is a sequence of characters 'abc def', not something that holds up your tracksuit trousers.

Getting started

I am not among those who believe that typing code into a text editor and running it with the interpreter from the command line makes you a better person. This means the first requirement is for an Integrated Development Environment (IDE) to manage and debug the Python code as it is developed.

Jetbrains PyCharm is an easy to use IDE providing good functionality straight out of the box, no plugins required, and is the best bet to make a start with I think. Initially CPython will be used. This Python version incorporates the latest language revisions as they are agreed. It provides multithreading but, because of its global interpreter lock, only one thread can run at any one time, so threads cannot execute in parallel. To investigate parallel algorithms another version will be needed. This may mean using a different IDE, but for the present let's keep it simple. Best to download and save the required installers before running them just in case something goes wrong.

The IDE requires Java version 6+ so go to http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html or wherever. This Java Development Kit (JDK) download includes the runtime environment (JRE) needed to run Java programs. Initially I will be using the 32 bit x86 version under Windows 7. Run this installer, accepting default settings when offered.

Next go to https://www.python.org/downloads/windows/ and click on the latest stable release of Python 3 (at the top of the page).

Currently the link will take you to https://www.python.org/downloads/release/python-343/. Select the installer for your machine (near the bottom of the page): Windows x86-64 MSI installer for 64 bit Windows. I ran the Windows x86 MSI installer under 32 bit Windows 7. Generally accept the defaults for installation, but in the second dialogue that appears, 'Customize Python 3.4.3', scroll down the list of options and turn on Add python.exe to path. A detailed installation guide can be found at http://www.howtogeek.com/197947/how-to-install-python-on-windows/

Having installed Java and Python we are ready to install the IDE. Download PyCharm Community Edition from https://www.jetbrains.com/pycharm/download/ and run the installer. Accept defaults but on the second screen check the box to associate the IDE with .py Python files. When installation finishes check the box to run PyCharm. Accept defaults and eventually a dialogue opens. Click Create New Project and an oversized approximation of the New Project dialogue we will be using from now on appears.

The point to note is that the text in the top edit box includes the project title at the end. Change this text to your version of C:\Users\Martin\PycharmProjects\ALGORITHMS\Greeting, maybe?

Create the project and after its initial 'Indexing', which can take PyCharm quite some time, you should see something like that shown below. The tab bars at the bottom and side can be hidden or shown by clicking the icon that looks a bit like a grey monitor in the bottom left corner and the tabs themselves dragged into position at the ends of the bars or between bars according to taste.

If the project tree is not visible it can be shown by clicking the Project tab or double clicking on the project folder in the tool bar labelled Greeting. Right click on Greeting in the project tree (not the Greeting on the tool bar) to get the context menu and select New > Python File as shown. Name this file 'hello' and it opens in the editor window.

A greeting algorithm

As a starting point a simple requirement might be to develop an algorithm that will read in the names of any users who happen to be around and output a greeting that includes the user's name. If no name is input when the return ('Enter') key is pressed the program that implements the algorithm terminates.

A first stab at doing this might be the following lines of Python code:

print("Hello, g'day, its great to see you\n")
while name != '':    # while name is not equal to an empty string
    name = input('Please input your name >')
    print('Hello', name)
    print('Welcome to algorithms and data structures in Python.\n')
print('\nBye')

Readers who are already acquainted with Python can skip this section, otherwise a few features are worth noting:

print() is a built-in function so there is no need to write it or import it from somewhere else (in fact it is imported by default from a package 'builtins'). The print() function simply outputs whatever arguments are passed to it, as many as you like. By default the results of these arguments are separated by spaces in output and print() puts the cursor on a new line afterwards, but this behaviour is easily modified. input('user prompt ') is another built-in function to read user input.

When looking at Python code posted on the internet, much of it is in older versions of Python. Prior to version 3, print was a statement - print 'something'. This code will get an error under 3 but is easily fixed by inserting brackets - print('something').

The # character indicates the start of a remark or comment: explanatory text not executed when the program runs.

"Hello, g'day and great to see you!" is a string literal. Such strings can be enclosed in double quotes " ", in which case the string can include single quotes as in "g'day". Alternatively the start and end of the string can be indicated by single quotes and it can contain " characters. '\n' included in a string represents a new-line character over and above anything print() may add to output.

In Python each line of code generally comprises a program statement. To start a new statement simply put it on a new line. Terminators such as ; are not required but a semicolon can be used to separate multiple statements on the same line if you must. Very long statements can go on multiple lines joined by a back-slash \ but this is seldom required and is not always compulsory.

Compound statements, like while, are introduced with a colon :. The body of the statement is defined by indentation - no curly brackets or begin ... end required. Thus indentation is critical: it is the only indication of the statements forming part of a compound block and it separates those statements from any statements that follow, not part of the block. In 'hello.py' above the compound while includes the three indented lines of code. print('\nBye') is not part of the while and is not indented. Python compound statements are:

    while <condition>:
    for <a_variable> {,<a_variable>} in <interval | sequence | set>:
    if <condition>:
    elif <condition>:     # else-if
    else:
    try:
    except [<exception class> [as <variable>] {,<exception class> [as <variable>]}]:
    finally:
    with <an object> [as <variable>]:
    def <a_function>:
    class <a_class>:

In these definitions < > brackets enclose an item description and are not Python syntax. Likewise | indicates alternatives, [ ] brackets enclose an optional inclusion and { } braces enclose an option repeated as many times as may be required. Above, the last two cases are function and class definitions respectively, rather than executable compound statements as such but the indentation rules still apply.

Using the IDE the Tab key inserts an indentation at the cursor, the delete back-arrow key deletes the entire indentation and Shift + Tab removes an indentation from the line the cursor is on. If one or more lines are highlighted by dragging the cursor all the selected lines are relocated with Tab or Shift + Tab, handy when major code restructuring is required. Try highlighting the three lines of code after the while and shifting them around, for example.

Add the Python code shown above to hello.py.

Running the program

The next step is to run the Greeting program and see what happens. Any Python file, also referred to as a script, can be run as it stands. No need for a main() function or a program file. The Python interpreter simply starts at the top of the page and works down executing any executable statements it finds on the way, looping and/or branching as directed by the code. If any other files are referenced in import directives these files are accessed when the import is encountered. Imports can go anywhere in a file but the default import of 'builtins' can be thought of as being right at the top of the page.

There are many ways of running a script from the IDE. A straightforward way is to right click on a file that is open in the editor or on the file-name in the project tree and select Run 'hello' from the context menu (next to the green triangle). Do this and an error message appears

The variable name is not defined. Reading down from the top 'name' has not been encountered before the while loop entry condition is found. As far as the interpreter is concerned this variable simply does not exist. Double click on the file path underlined in blue and the IDE will put the cursor at the start of the line where the error was found. If the required file is not open in the editor it is opened.

To overcome this problem something needs to be assigned to name before the while. Anything will get an entry to the loop apart from the empty string ''. As can be seen from the assignment to name from the input() function, the assignment operator in Python is =. We could write

    name = 42

This assignment can be read as 'Let name equal 42' or 'Name takes the value 42'. Some people like 'Name becomes 42' but this is too mushy for my taste: name doesn't become 42, it is still the variable name before and after the assignment.

Python makes no objection to the same variable holding an integer in one place, a string in another or any other type somewhere else but whether it is sensible to use this facility is another question. A general solution for the current requirement is to use the special object None. A variable holding None is never equal to any other variable except another one holding None.

Before the while type in the line

    name = None

Run 'hello.py' again and the 'Please input your name >' prompt appears. To input names click in the Run pane to relocate the cursor, type in the name and hit return. Do not worry when the cursor appears before the prompt, typing anything puts it in the right place. The program runs successfully and exits when return is pressed without typing a name. However, its exit behaviour is not really what is required.

If it is possible to mess up something this simple, perhaps there is more to this algorithms thing than might at first be thought, but further discussion will have to wait for another time.

##### Reference
    Wirth, N. (1976) Algorithms + Data Structures = Programs, Prentice Hall

[Algorithms 1: Algorithms and data structures      (c) Martin Humby 2015]


## M269 Demos

Edited by Martin Thomas Humby, Monday, 3 Aug 2015, 00:11

Here are a few demos done while working on M269, perhaps of interest to someone. Also there is the AVL tree. It outputs trees with balance factors but to make it easily comprehensible its development path Node->BinaryTree->deletions->balance factors->AVLTree would need to be shown. The tree printing module is quite basic and more 'it works' than neat coding.


## Easy graph algorithms

Edited by Martin Thomas Humby, Tuesday, 7 Apr 2015, 23:03

A Python implementation of a generic graph structure makes various algorithms from M269 Unit 5 easy to understand.

A graph comprises nodes (aka vertices) representing entities and edges, conceptualized as lines joining the nodes and representing the relationships between them. In real world applications the graph can model a situation involving relationships between concrete entities: towns and the roads between them, connection points on a PCB or microchip, buildings requiring provision of a cabled or piped service, the sides and base of a folded metal part, super-pixels in an image used for object recognition, and so on.

A graph can be directed: its edges can be traversed in only one direction, or not: two way traversal is permitted. Edge traversal may involve a cost and edges can have a weight representing this cost. A single edge joins only two nodes and where there is no relationship between the two no edge joins them.

In this implementation nodes are represented by a base class Vertex:

class Vertex( ):
    def __init__(self, name, ID):
        self.ID = ID                # integer identifier mapped to/from name
        self.name = name            # any or multiple types relating to the application
        self.edges = StableSet()    # addition order of vertices in the set is maintained
        self.distance = 0           # basic attribute e.g. distance from nearest town

Edges are represented by an Edge class:

class Edge( ):
    def __init__(self, toVertex, weight = 0):
        self.toVertex = toVertex    # the vertex at the far end of this edge
        self.weight = weight        # the weight, cost, distance got by traversing

By default a Graph object is directed – edges run in only a single direction toVertex, but a graph can be constructed as not-directed. When edges are added to a not-directed graph the vertices they reference are added, as for directed, but edges running in the reverse direction are also put to the set of edges for each additional node.
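The Graph class itself is not listed in the post. As a sketch of the reverse-edge idea, using plain dicts in place of the Vertex and Edge classes (all names here are invented for the illustration, not the post's API):

```python
def add_edges(graph, name, *pairs, directed=False):
    # graph: dict mapping a vertex name to a list of (neighbour, weight) pairs
    for other, weight in pairs:
        graph.setdefault(name, []).append((other, weight))
        if not directed:
            # not-directed: also store the edge running the other way
            graph.setdefault(other, []).append((name, weight))
```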

When a vertex is added to a graph any type can be used as its 'name': the identifier apparent to users.  A name is maintained as unique as is the integer identifier it maps to.

A sensible way to construct the graph from Unit 5 Figure 5.20 is

g = Graph(directed = False)
g.addEdges('w', ('u',2), ('z',4), ('x',2))
g.addEdges('x', ('v',6), ('y',3))
g.addEdges('u', ('v',1))
g.addEdges('y', ('v',5))
g.addEdges('v', ('y',5))
g.addEdges('z', ('y',1), ('v',6))

but

g.addEdges('w', ('x',2), ('z',4))
g.addEdges('x', ('v',6), ('y',3))
# etc. ... then
g.addEdges('w', ('u',2))

gets much the same graph although algorithms such as depth first show different output because edges in the StableSet are in a different sequence and vertices are found in a different order:

['w', 'u', 'v', 'x', 'y', 'z'] vs. ['w', 'x', 'v', 'u', 'y', 'z']

To provide additional vertex attributes, as needed by Prim's algorithm for example, an extended class can be used:

class PrimsVertex(Vertex):    def __init__(self, name, ID):        Vertex.__init__(self, name, ID)        self.currentNearest = None

A Graph will construct any class of vertex or edge passed to its constructor. Now we can define Prim's algorithm as

Add the graph vertices to a minimum-priority queue with a start node distance zero, the rest with distance infinity.
While there are vertices in the queue:

Dequeue a current-vertex and examine its edges. If any edge-weight to another node is less than the other node's distance and the other is still in the queue update its distance changing its queue position and set its currentNearest vertex to the current-vertex.

The node at the front of the queue now carries the minimum distance to reach it from any node, and thus from its currentNearest, so append the edge currentNearest -> front-node to a list of such edges.

Join up the edges in the edge list by matching start node to end node to get a new graph, the minimum spanning tree of the original.
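The steps above can be sketched as a stand-alone function using Python's heapq. This is an illustrative version, not the blog's Graph-based code: it re-pushes a vertex with its new distance rather than changing its queue position, then skips stale entries on dequeue.

```python
import heapq

def prim_mst(adjacency, start):
    """Minimum spanning tree of a connected not-directed graph.
    adjacency maps vertex -> list of (neighbour, weight) pairs."""
    dist = {v: float('inf') for v in adjacency}    # all distances infinity...
    nearest = {v: None for v in adjacency}         # ...the currentNearest of the text
    dist[start] = 0                                # ...except the start node
    queue = [(0, start)]
    in_tree = set()
    mst = []
    while queue:
        d, v = heapq.heappop(queue)
        if v in in_tree:
            continue                               # stale queue entry, skip it
        in_tree.add(v)
        if nearest[v] is not None:
            mst.append((nearest[v], v, d))         # edge currentNearest -> v, with weight
        for u, w in adjacency[v]:
            if u not in in_tree and w < dist[u]:
                dist[u] = w
                nearest[u] = v
                heapq.heappush(queue, (w, u))      # re-push instead of decrease-key
    return mst
```

Run on the Figure 5.20 graph above, this yields a five-edge tree of total weight 9.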

General application

This project was implemented as a quick guide to Unit 5 graph algorithms. To make it generally applicable some modification might be good. Name mapping to integers looks OK: vertices are in a lookup table, giving fast access through the graph structure. Implementing not-directed graphs using paired directed edges is an easy way to do it; whether this will get any snags for more advanced work is unknown.

Adding algorithm-specific attributes by sub-classing is not generally applicable, I would think. It could be better to have the algorithm function add these attributes and delete them afterwards. The current minimum-priority queue is probably more a good laugh than a serious contender.

Single class implications

In current Python an object can be given a new attribute at any time simply by assigning a value to it, and the attribute can subsequently be removed using del:

    an_object.newAttribute = 0
    del an_object.newAttribute

This works with new or old-style objects. Provided the multipurpose Vertex.distance field is retained so that comparison works as before, the impact on the algorithms from using this strategy is generally minimal. The exception is topological sort, which requires no fewer than four loops through the entire graph to deal with the required incoming-edges attribute.
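The attach-and-delete strategy might look like this on a throwaway class (the names `Vertex.incoming` and `incoming_counts` are illustrative, not the blog's actual classes): the algorithm attaches an incoming-edge counter on the fly, uses it, then deletes it so the vertex objects are left as they were found.

```python
class Vertex:
    """Bare vertex with no algorithm-specific fields built in (illustrative)."""
    def __init__(self, name):
        self.name = name

def incoming_counts(vertices, edges):
    """Topological-sort style in-degree counting via a temporary attribute.
    edges is a list of (from_vertex, to_vertex) pairs."""
    for v in vertices:
        v.incoming = 0                 # attach the attribute just for this algorithm
    for _, to_vertex in edges:
        to_vertex.incoming += 1
    counts = {v.name: v.incoming for v in vertices}
    for v in vertices:
        del v.incoming                 # remove it again, leaving Vertex as found
    return counts
```

Afterwards hasattr(v, 'incoming') is False for every vertex, so the class carries no algorithm baggage between calls.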

The zip has been updated to include these investigations in TopSort2.py, Prims2.py, etc.

To deal with the TopSort requirements I looked at some variations to Graph and Vertex, and included a fromVertex attribute for Edge and an in_edges attribute for Vertex. The initial TopSort incoming count is now aVertex.in_order(). These variations are in Graphs3.zip


## An AVL Tree in Python

Visible to anyone in the world
Edited by Martin Thomas Humby, Wednesday, 1 Apr 2015, 14:16

Seems to me that the workings of an AVL self-balancing binary search tree are easier to understand if all functionality is either in the tree or in the nodes: one or the other.

Node functionality fits well when using recursion to navigate the tree but has the disadvantage that for an empty tree the root is null. Thus we see code in the tree itself with most method bodies prefixed with

    if root != None:

This implementation uses dynamic dispatch to overcome this problem. It features three classes of 'node': EmptyTree, RootNode and Node. The behaviour of methods such as put(), which adds a new key-data pair to the tree, and remove(key) varies according to the class of node they are called against.

If put(key, data) is called against an EmptyTree at the root, it creates a new RootNode from the key-data and assigns it to the root. When the tree becomes empty because all the nodes have been deleted, the RootNode assigns an EmptyTree.
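The dispatch pattern can be sketched minimally. The class names match the text but these bodies are simplified stand-ins, not the zip's code: the point is only that the same call, t.root.put(...), does different things depending on which class of node the root currently is.

```python
class EmptyTree:
    """Stands in for a missing root; put() replaces it with a real node."""
    def __init__(self, tree):
        self.tree = tree

    def get(self, key):
        return None                    # an empty tree holds nothing

    def put(self, key, data=None):
        self.tree.root = RootNode(self.tree, key, data)

class RootNode:
    def __init__(self, tree, key, data):
        self.tree, self.key, self.data = tree, key, data

    def get(self, key):
        return self.data if key == self.key else None

    def put(self, key, data=None):
        # a real implementation would descend into the tree and rebalance here
        if key == self.key:
            self.data = data

class Tree:
    def __init__(self):
        self.root = EmptyTree(self)    # an empty tree, not a None root
```

No method body anywhere needs an `if root != None:` guard: the EmptyTree object is the guard.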

The power of Python allows the AVLTree class to be implemented without writing numerous methods that call the root. A single method binds the current root and its public methods to appropriately named instance variables:

    class AVLTree():
        def __init__(self):
            self.root = EmptyTree(self)
            self.setBindings()

        def setBindings(self):
            self.isEmpty = self.root.isEmpty
            self.clear = self.root.clear
            self.put = self.root.put
            self.putTuple = self.root.putTuple
            self.putKeys = self.root.putKeys
            self.putIt = self.root.putIt
            self.get = self.root.get
            self.remove = self.root.remove

        def __iter__(self):
            return self.root.__iter__()

        def __bool__(self):
            return not self.isEmpty()

Everything else is hidden and access to the root is completely transparent. A method call is anAVLTree.put(key, data) for example, where anAVLTree is an instance of the class. Binding and rebinding only takes place when the tree's state changes from empty to not-empty or vice versa, infrequent events for most applications.

The code is in the zip. If you would like to have a look extract all files to the same folder. A few points: if something: is used rather than if something != None: or if something != []: etc. I also make use of the fact that int(aBoolean) gets 1 if aBoolean is True, 0 otherwise, thus

    def put(self, key, data = None, parent = None):
        # return 1 if the height of the subtree from this node has increased, 0 otherwise
        ..........
        balance = self.balance
        if key < self.key:
            if self.leftChild:
                self.balance += self.leftChild.put(key, data, self)
        .......... etc.
        return int(self.balance != balance and self.balance != 0)

And then of course there is +=, equivalent to Delphi's inc(a, byB), which may be unfamiliar to those who feel traditional Pascal represents good style ........

A complete explanation of adjusting balance after a rotation can be found at (Brad Appleton 1997), but note that Brad uses right_height - left_height to determine balance while, by following maths' anticlockwise tradition and Wikipedia, I have ended up with the opposite.

One interesting point: although in-order iteration while deleting nodes in the loop does not work (some nodes are not found), doing this does not corrupt the tree. However, because I have used single linking, the tree can be cleared by assigning an empty tree to the root, and in CPython all its nodes should then be deallocated without waiting for garbage collection.

Update, stress test added: put 100,000 records, check they are in the tree, remove them all.

I was quite impressed by the time it takes to check the 100,000. Almost no time at all!


## Overriding equals() in Java

Visible to anyone in the world
Edited by Martin Thomas Humby, Saturday, 4 Oct 2014, 14:00

You can now attach files to a post so here is a NetBeans project giving the classes from the original article and a test file.


## Numerous forms in a screen wide hidden MDI

Visible to anyone in the world
Edited by Martin Thomas Humby, Tuesday, 10 June 2014, 17:31

This type of interface has advantages when the 'screen' extends across multiple monitors: all forms remain under the control of a single menu bar, for example. But what happens when screen-space is restricted or there is a requirement to refer to more forms than can be accommodated in available space?

My solution is scaleable controls (buttons, memos etc.) that resize when the window is resized and a sidebar where forms can be docked and automatically resize to fit sidebar width with a height set by dragging the top or bottom edge of a docked form. In the demo shown below only one form can be expanded in the bar but it could be adjusted to allow several or all docked forms to be expanded. If forms contained only graphics rather than the demo's database application the bar could automatically resize to accommodate smaller thumbnails when expand all was selected.

Modified version of demo application from

Martin Humby 'Scaleable Controls: using information hiding and a systems view to design classes' The Delphi Magazine Bonus Articles 2007 - iTec.


## Another pasted background

Visible to anyone in the world
Edited by Martin Thomas Humby, Saturday, 7 June 2014, 21:52

Black Rain 1
