Tuesday, 6 December 2016

Could making Indyref2 about the EU cost Scotland Independence?

Linking Independence and EU Membership could be decreasing support for independence and making it unlikely we will achieve either

Pro-EU NO voters and Anti-EU Yessers have switched sides in equal numbers
There is some evidence, and a growing concern, that pushing for continued EU membership for an independent Scotland – a flawed organisation with laudable ideals – is causing support for independence to stall or even go into reverse. Admittedly the only semi-hard evidence we have as yet is a YouGov poll and a Times poll, the first of which can be criticised on the basis of its methodology. But given the current state of politics, the surprises of 2014, 2015 and 2016, and the fact that the SNP lost their majority despite facing clueless opposition from Labour and slightly less clueless opposition from the Tories, the YES movement should take this risk seriously.

The figure above shows how Scotland is divided on the issues of independence and EU membership, together with the shifts in support for independence since the EU referendum (figures from Wings Over Scotland). An initial surge in support for independence has subsided because of almost balanced migration between NO and YES.

Scotland split in 2014 with 45% for independence and 55% against. In 2016 the vote was 62% for EU membership and 38% against. The problem is that we do not know how many independence supporters wanted to be out of the EU but voted, with a heavy heart, to remain, thinking this would speed up independence; and we do not know how many independence supporters voted to Leave because they felt remaining in the EU would delay independence, or simply felt the EU to be the greater of the two evils. All we have are snapshots of a popular will that may have changed.

Wings Over Scotland (http://wingsoverscotland.com/the-gambit/) have, however, done some research and say that a lot of Unionist Remain voters want to remain in the EU and would back independence in order to do so, but they are balanced by an equal number of YES Leave voters who think staying with the UK is the only way to get out of Europe.

We should not consider Anti-EU Yessers who would vote NO as not really wanting independence. It simply means they consider the UK an apple with a small worm in it and the EU an apple with a large worm in it and want to go with the lesser of two weevils.

The Point also note that the recent National Survey is not being published, suggesting it backs up this anti-Independence trend.

In other words, the UK-EU issue is ripping apart Unionists and independence supporters in about equal numbers, leaving the most likely result of a second independence referendum the same as the first.

This is, in Orwell’s Newspeak, which seems appropriate to these times, doubleplusungood.
Worse, a number of staunch Yessers seem to be in denial about this risk.

What can be done

Since linking these two issues is threatening independence, the answer is to decouple them in such a way that Pro-EU unionists can vote for independence as a way of staying in the EU and Anti-EU Yessers can vote for independence as a way of getting out of the EU without having to stay with the UK. One suggestion is that the Scottish parliament should commit to an EU membership referendum within a few months of a YES vote in a second independence referendum. This would doubtless be met with a groan by most of Scotland but would settle the matter as decisively as a NO vote in a second independence referendum. More importantly it would decouple the two issues, allowing Pro-EU unionists and Anti-EU Yessers to vote for independence knowing they would have their say, and a chance of achieving their goals, later.

If the Scottish Government committed to a post-independence election fought on EU membership they could be accused of running away from the mess they created with independence. They would also incur the wrath of impatient anti-EU voters and possibly give a UKIP-clone party (SIP?) an entry into Scottish politics. I think hardly anyone in Scotland wants that.
To avoid doubt my position is that I want to be in the EU in an independent Scotland.
However I want an independent Scotland more than I want to be in the EU and, despite a touch of referendum fatigue, I want EU membership to be decided after independence, when we will have a government that can afford to be honest about the pros and cons. If the result is that Scotland leaves the EU, and I perceive the debate to have been fair and comprehensive, I will accept the result even if I dislike it.

Denial will lose the next independence referendum.

I would rather be considered paranoid or overly pessimistic than see independence lost by being in denial about electoral trends such as these. At present it looks like Brexit has cost us the gains made since 2014, and things are getting worse. In evidence I cite the failure to publish the National survey and a sudden pause in Unionist ranting against a second independence referendum, almost as if they now believe they might be about to win one.
Once Article 50 is triggered, the only way this amateur political strategist can see to boost the chance of independence is to commit to an EU referendum in Scotland soon after a YES vote. To make such a commitment before Article 50 is triggered would be a bad, possibly fatal, move. Right now everything depends on the Prime Minister, who has said it will be triggered in March 2017.

Unless, of course, Westminster decides to eject Scotland from the UK, but that is currently very unlikely.

Let's ensure that Leave and Remain YES voters can unite, and that independence is not seen as owned purely by the SNP.

Notes and Further Reading

About the author:
A keen supporter of Independence who believes the pen is as mighty as the march, and who has a wide range of interests other than politics, from technology, through history from a political perspective, to paranormal and anomalous phenomena such as poltergeists, UFOs and time slips, to which they attempt to keep an open but skeptical mind while being amazed by human behaviour, which makes poltergeists, UFOs, time slips and fairies look like child's play.

Monday, 29 August 2016

Fibonacci sequence in Java, C, Fortran, Python and Forth

Writing a program to calculate Fibonacci numbers is nowadays an exercise in triviality, but it is not futile to examine various ways to implement this calculation and to look at how different languages offer other, perhaps more elegant, ways to generate the sequence.
In what follows only the heart of the Fibonacci algorithm is shown. Embedding it in a loop to generate the entire sequence is not considered, nor is the need for arbitrary precision arithmetic, which is available for most languages nowadays.
The closed-form solution for the nth Fibonacci number is not considered, nor is parallelisation of sequence generation, and, with the exception of Forth, only established mainstream languages have been considered at this stage.
The Fibonacci sequence is defined by the transformation of an ordered pair
(x, y) --> (x+y, x) with x >= y and y >= 0
so that, for example, (3, 2) --> (5, 3) --> (8, 5).
The first algorithm most people think of involves putting x into a temporary location
x --> temp
x + y --> x
temp --> y
Or alternatively
swap(x, y)
x + y --> x
We will call this the traditional algorithm. However other languages suggest other algorithms.
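As a sketch, the traditional algorithm can be written out in Python (the function name and the returned pair are illustrative choices, not part of the post):

```python
def next_fib_traditional(x, y):
    """One Fibonacci step on the pair (F(n-1), F(n-2)) using a temporary."""
    temp = x       # x --> temp
    x = x + y      # x + y --> x
    y = temp       # temp --> y
    return x, y    # the pair is now (F(n), F(n-1))
```

For example next_fib_traditional(3, 2) yields (5, 3).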
Java offers Lists, which can be considered mutable arrays. This allows an algorithm that does not use temporary storage and which cleans up after itself. Transitioning to Java BigIntegers is straightforward. There seems to be no advantage to adopting a Java 8 stream-based approach here.
// fibs is a list initialised with two elements, (F(n-1), F(n-2)).
// The new value is inserted at the front and the oldest removed,
// so the list is transformed in place and stays two elements long.
public static List<Integer> nextFib(List<Integer> fibs) {
    fibs.add(0, fibs.get(0) + fibs.get(1));
    fibs.remove(2);
    return fibs;
}
To generate a list holding the entire sequence, until memory runs out:
public static List<Integer> nextFib(List<Integer> fibs) {
    int len = fibs.size();
    fibs.add(fibs.get(len - 2) + fibs.get(len - 1));
    return fibs;
}
Fortran is perhaps the oldest high-level language and still has a following, though it has largely been replaced by C. Early experience showed C to be easier to code than Fortran, prompting a move to C.
Fortran seems only to allow the traditional algorithm and, at least using Eclipse for Parallel Application Developers, requires more boilerplate code than Java. The advantage of Fortran is speed, though again C has largely caught up. The code here puts a subroutine into a module and calls it from a main program, thus altering the state of the input array. Fortran therefore offers little of interest in this context but is included for historical reasons.
  ! Given F(n-1) and F(n-2) compute F(n)
  ! via F(n) = F(n-1) + F(n-2)
  ! Using an array of length two works round the fact that
  ! a Fortran subroutine cannot return a value
  ! and Fortran does not have lists like Java
  module Fibonacci
  contains
      subroutine FIB(F)
          integer, dimension(:) :: F
          integer :: TEMP
          TEMP = SUM(F)
          F(2) = F(1)
          F(1) = TEMP
      end subroutine FIB
  end module Fibonacci

  ! the program name must differ from the module name
  program fib_main
      use Fibonacci
      implicit none
      integer, dimension(2) :: Fibo
      Fibo = (/ 2, 1 /)
      call FIB(Fibo)
      print *, Fibo
  end program fib_main
The C code snippet here uses pointers, which are needed because C, like Java, is call by value: the function receives copies of its arguments, so pointers must be passed to let it modify the caller's variables. Apart from that the algorithm is the traditional algorithm, and since C does not have built-in memory management, care must be taken to avoid memory leaks if this is extended with an arbitrary precision library to generate high-value Fibonacci numbers.
// given F(n-1) and F(n-2) produces F(n)
void fib(int* Fnminusone, int* FnMinustwo)
{
    int temp = *Fnminusone;
    *Fnminusone += *FnMinustwo;
    *FnMinustwo = temp;
}

// x = F(n-1), y = F(n-2)
int x = 3; int y = 2;
fib(&x, &y);
printf("%d %d", x, y);  // prints 5 3
Python is interesting because it allows a function to return multiple values. Under the hood this may involve swaps and data shuffling, but this is hidden from the caller.
This is fast, simple and elegant. Also, Python has arbitrary precision integers built in.
def nextFibs(x, y):
    return x + y, x
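Although generating the whole sequence is outside the scope of this post, a short illustrative loop shows how iterating the pair function produces the sequence (the starting pair (1, 0) and the loop are my additions, repeating the definition so the sketch is self-contained):

```python
def nextFibs(x, y):
    return x + y, x

# Iterate the pair (F(n), F(n-1)), collecting F(n) each time.
x, y = 1, 0
seq = []
for _ in range(10):
    seq.append(x)
    x, y = nextFibs(x, y)
print(seq)  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```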
Forth is a language like no other. It is a reverse Polish, stack-based, threaded interpretive language, more akin to a platform-independent assembler. Forth developers start with a set of Forth primitives called words and combine them to form new words. Here is a word that takes two integers off the stack and performs the Fibonacci transformation.
: NextFibs  dup rot + ;
Typing a number in the Forth interpreter puts it onto the stack.
dup duplicates the value on top of the stack, so 2 dup results in the stack holding 2 2.
rot does a cyclic shift of the top three stack items, bringing the third item to the top.
+ takes the two numbers at the top of the stack, adds them and puts the result on the stack,
so for example 2 3 + results in the stack holding 5.
Thus 2 3 NextFibs (writing each stack with its top on the right) does the following
dup: 2 3 --> 2 3 3
rot: 2 3 3 --> 3 3 2
+:   3 3 2 --> 3 5
leaving F(n-1) = 3 below F(n) = 5, ready for the next step.
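As a cross-check, the stack effect of NextFibs can be simulated in Python, modelling the Forth data stack as a list whose last element is the top of the stack (the helper names are mine, not Forth's):

```python
def dup(s):  s.append(s[-1])              # duplicate the top item
def rot(s):  s.append(s.pop(-3))          # cyclic shift of the top three items
def plus(s): s.append(s.pop() + s.pop())  # add the top two items

stack = [2, 3]   # F(n-2) below, F(n-1) on top
dup(stack)       # [2, 3, 3]
rot(stack)       # [3, 3, 2]
plus(stack)      # [3, 5]: F(n-1) below, F(n) on top
```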
Of the languages used here Forth offers the leanest Fibonacci implementation and, to my mind, the most elegant. It is also the hardest to understand. The Python algorithm is not quite so lean but easier to understand. Java Lists allow an algorithm that does not involve temporary storage and is fairly lean, though overall Java is a verbose language. Fortran shows its age, and C, being call by value like Java, uses pointers, which many find confusing nowadays.
Assuming you want or need to program the sequence, Python will allow the fastest development. Java and Python allow writing to files easily, but Python scores by having transparent arbitrary precision arithmetic. Forth, with an appropriate library, may be the fastest option but tends to be a write-only language and will need good commenting.
The take-home from this is that an apparently simple transformation like the one behind the Fibonacci sequence may be implemented in different ways in different languages, and some languages allow more elegant implementations than others. When implementing a solution to a problem it is worth choosing the language to match the problem (if you have a choice) and looking at different ways to solve the problem.

Saturday, 13 August 2016

Becoming a better programmer

You want to become a better programmer. First ask yourself why. Do you want to advance your career, get a better (paid) job, get the acclaim of your fellow workers, or is it simply professional pride?

If you want to advance your career you are probably an employee or possibly a contractor. As an employee your options will be limited to what your employer thinks they need, or what you can persuade them they need. Some of the messiest code around is in enterprises where silver-tongued employees persuaded their manager that they HAD to use the latest up-and-coming technology, then left, leaving code that uses half a dozen legacy technologies, most of which are on the way down, not up. So be careful what you wish for: you might end up maintaining it. Remember also that as you progress you will code less and less and plan and design more.

Similarly, if you want the acclaim of your fellow workers you will have to look at what they consider good and hope they did not fall into the "if it's popular it must be good" trap, also known as "eat dung, forty million flies cannot be wrong". Here too you may end up pretending to yourself that you like a language or technology that you hate and that does not suit your personality.

What does ”better” mean? 

This question has no right or wrong answer. As an employee ”better” could mean adhering more to company standards and producing maintainable low bug rate code fast. As a contractor it could mean delivering and not leaving a code smell behind. In either case issues of security, readability and scalability are important. Knowing when to trade one of these attributes off against another is a sign you are progressing. 

A professional musician, classical or jazz, will specialise in one instrument. Few will tackle more than one, though some rock musicians are competent with several instruments while having one at which they are best. It is rare, however, to find a musician who has mastered violin, saxophone and drums.
As a developer you can choose your main language and go deeply into it, plus one or two other languages which will shed different light on the problems you solve. In this context a better programmer understands their chosen tools more deeply as time progresses but experiments with other tools from time to time. 
The bottom line is you will become a better programmer if you deep dive into one language, though you may have to become a full stack developer as an employee and almost certainly if you start your own business. The bonus is that flirting with other languages will help you improve more by teaching you more about computing, just as a professional violinist may pick up something about music from playing the drums. 

Do things you cannot do. 
Don’t overreach, but try things that scare you or that you know nothing about.

Train to your weaknesses: Security, Concurrency and Cryptography come to mind. 

Cryptography is hard: you will spend more time studying than coding, and there are lots of pitfalls even in implementing a well-studied algorithm. Everything you do that you could not do before makes you a better programmer.

If you cannot think of anything new and innovative, revisit your old code and solved problems, refactor, and think how they could be done differently; for example Java 8 allows a radical simplification of code, at least until the "more complex is better" crowd get in on the act.

Consider rewriting old solutions in a new language, if possible. Or think of three or four other ways you could have solved the problem.

Another neglected way to improve is to teach. Teach children, or adults with no computer experience. When one person teaches, two people learn. It could be a humbling experience, but a satisfying one, if your students overtake you and vanish into the stratosphere.

Wrapping it up
Becoming a better programmer means different things to different people. Before trying to improve examine your goals and motives and develop an action plan. Do things you cannot do, simplify code and design, redo things differently, preferably in several ways, and teach others.

Finally make sure you have a lot of time away from the keyboard. The best ideas and solutions come at the bus stop, at 3am (keep pen and paper handy) or in the bath (Train your memory). Make sure your mind as well as your body is away from the keyboard. This time is for incubation of ideas.

By the way:  if you dream you are coding you need a long break.

Thursday, 16 June 2016

Artificial Intelligence and The death of the software profession

Within our working lifetime we will be singing ”AI Killed the Coding Star”

Corporate consumer capitalism production (CCCP) imperatives will drive the adoption of software that can code to a specification and learn from experience. As AI evolves such programs will be tasked with improving themselves till they can accept and refine ambiguous and imprecise requirements from humans, determine the functional specification and architecture then produce a finished tested system. Finally AI systems will be able to decide business objectives, strategy and tactics better than humans. At this point humans will be redundant, at least in large corporations. 

The Death of the Software Engineer
Today’s software industry
The imperatives of corporate capitalism require producing more and more faster and faster with less and less.
In the digital technology domain the response has been to automate as much as possible, introduce methodologies such as Agile and Kanban and, as in the slave plantations of the 18th and 19th centuries, separate conception from planning, planning from execution, and support functions from execution and delivery, and to organise these areas so that delivery of one project can proceed while planning for the next is under way and the next-but-one feature is being conceived. Like the ideal plantation this has the forward impetus of a mighty machine: one can hear the wheels turning. And, like the organisation of plantations and factories, it is very, very efficient, regardless of how it may seem at the workface.
There are differences from plantations: Software development requires at least a minimal amount of thought from a developer though Agile planning is often implemented in a way that turns coding into a relatively mindless process, assuming the developer has some knowledge of the technologies they are using. This is like teaching a slave to use a slightly temperamental machine. Developers are thus infantilised less than slaves but the baby language of user stories suggests not by much.

Goodbye Developers
Development of AI threatens to remove the developer from the picture entirely. Consider a system that can receive a specification from a human, parse it, generate code, test the code and then deploy it. A reward system can be built in that motivates the system to produce an optimal trade-off between metrics such as bug count, performance and memory footprint.
It need not be as good as a human as long as it can learn. Consider a system that does all this far faster than you can read, let alone type, works 24/7/365 with no diminution of energy, has a perfect memory and never makes the same mistake twice. Even if initially you are a better programmer, how long would your superiority last? And what if the requirement is to improve itself?
Such an AI would be the perfect employee. It could not quit, it learns but does not rock the boat, it does not think inconveniently outside the box, it needs no sleep, pension or holidays, spends no time in meetings and, if it continually backs itself up, it will not go sick, since the backup(s) will seamlessly take over. Humans will no longer be needed for this phase of software development. Goodbye entry-level developers and a large number of grunts happy to plough the same furrow (J2EE, Spring etc.) ever deeper. This was predicted a while ago. And goodbye security engineers, as other AIs will continuously try to find ways to penetrate the code and then find ways to make it more secure.
It is only a matter of time before this and more happens. And when the exponential curve of development takes off it will be fast, though not lightning fast, as corporate inertia, unwillingness to invest and unwillingness to write off legacy code will slow it down for as long as possible. Some may even make a virtue of using humans, just like the brewery that advertised it was using the original traditional Victorian equipment with which it started: till it burned down, after which it advertised it was using the latest technology. This 'functional stupidity' will delay the adoption of AI, though the first corporation to market such an AI will make a killing. Till one customer works out how to use it to make their own better version without infringing patents.
Developers Regroup
My first manager ever used to say
If they make a little black box that does my job I’ll get a job making little black boxes
Developers will initially move to developing specifications for the code generators to use. Cue the Open Source Specification Generator. The next generation of AI will be able to take an imprecise high-level specification, parse it for ambiguities and inconsistencies, ask questions, understand the answers, generate its own functional specification, implement it and present the (possibly human) customer with the result for approval.
This does not need true AI: research in languages will improve detection of ambiguities and natural language research will improve the way the system communicates with humans. Humans may still be needed to work in areas where the problem may not be understood well. Again that will not last.
End Game AI Wins
In the end AI will eliminate human developers. Its domain will extend to architecture and then eliminate large parts of IT support. When robots can unplug one component and plug in another there will be no need for humans at all. Line managers will have gone long ago, along with office politics, and only the business side will remain. Even business analysts will become digital. All that will be left is executives deciding the future trajectory of the business.
But even the CEO’s job is at risk. In the end a corporation will comprise machines producing goods and services for humans and other machines, and probably producing other machines. This will be the corporate wet dream: marginal production cost and no employee costs.
And it could be a nightmare, as indicated in ??. Except for one little thing: who will have the money to pay for these goods and services?
AI will by then have eliminated humans from a large number of industries and professions: driverless cars will have eliminated transport industry jobs, AI will have rendered doctors little more than the biological interface with the patient, and robots will be carrying out surgery. Dentists may last a bit longer. Administrators will be defunct, as their roles are routine and can be automated easily. Lawyers will be reduced to information gatherers, and so on.
Humanity’s response
A computer chess program can beat any human, and GO programs are approaching the same level. But people still play chess and GO, perhaps, at the highest level, with the aid of computers: people process these games in a different way from machines: see ??
Programming would be a hobby for some, a competitive sport for others and perhaps an art form, especially if new languages to enable this were developed: see ??
Or there could be another less pleasant scenario
The 0.1% who own all the wealth retreat into gated communities taking some servants with them leaving the rest to fend as well as they can without computers or the knowledge of how to find food. Outside the gated communities life becomes hellish and brutal for an indefinite period of time. Millions die. Inside the communities there are dictatorships and progress stagnates as the rulers seek to maintain the status quo and their power and luxury.
At the moment we are on a path that means the programmer becomes extinct. People who can communicate and work with strong AIs and preserve their sanity will be needed, but they will be a minority, and coding will die as a skill unless preserved the way some calligraphers try to preserve the art of handwriting.
Most likely however the future will proceed in a way that is unforeseen at present.
Further reading
Superintelligence: Paths, Dangers, Strategies Paperback: Nick Bostrom, OUP Oxford 2016. ISBN-10: 0198739834

Thursday, 9 June 2016

Blogging With LaTeX

LaTeX has some advantages for writers but is not directly suited to blogging. The benefits come at the cost of a learning curve: using LaTeX for blogging requires some technical knowledge, unless a LaTeX editor is used, and a slightly more complex publishing process.
The results of an initial experiment (this post) suggest it is worth persevering with LaTeX for blogging.
The author has for years prepared content offline using OpenOffice on OS X, pasting the content into an online editor. One problem has been handling cross references, which OpenOffice displays in a fashion that is distracting and aesthetically displeasing, and which slows down proofreading and review.
LaTeX handles references more intuitively than OpenOffice (your mileage may vary) and the results can be pasted directly into an online editor.
Another advantage is that LaTeX allows embedding of metadata, such as keywords, publication date and, where appropriate, invoicing data, in the document as comments, without affecting the output document.
These advantages made it worth experimenting with LaTeX for blogging.
One disadvantage is that LaTeX is very creative in, err, creating files that should ideally be dumped in an appropriately named directory until they are needed, which means fairly strong disk-clutter hygiene has to become habitual.
1 Requirements
A simple workflow is a paramount requirement.
Since LaTeX takes care of formatting, writers can concentrate on the words, and formatting was preserved when content was pasted into the online editor. It proved enough to generate a PDF file from the source file and paste the content online. This is marginally more complicated than preparing content in OpenOffice and pasting it online, but occasional problems with justification and formatting in OpenOffice are avoided this way. Where required an OpenOffice document can be produced directly from the LaTeX source; generally speaking an OpenOffice document will be needed only if the document has to be sent to someone, for example a publisher.
LaTeX users often handle references using BibTeX. For blog posts this is a nuisance: references should be kept within the document, thus avoiding disc clutter.
2 Solution
The solution was to use pdflatex in TeXShop to generate a PDF file. For rapid processing of files that are almost complete and require only small changes, a shell script was developed. The shell script hides the auxiliary files LaTeX produces, whereas TeXShop does not, and indeed produces more disc clutter.
Other editors such as LyX and TextWrangler proved unsatisfactory: LyX has a cluttered and counterintuitive interface, and TextWrangler's soft-wrap facility did not seem to work on OS X.
A LaTeX command to handle references was developed, adapting code suggested in a Stack Overflow post. At present this is stored in a known location and accessed using the \input command.
3 Conclusions
Although one study [Knauff M, Nejasmic J (2014)] has shown that producing simple text (that is, text not needing mathematical or other complex notation) using LaTeX is slower than using a WYSIWYG word processor when adjusted for experience, writing proved more pleasant because of the uncluttered interface TeXShop provides. That study measured transcribing existing text rather than composing it (a task which might anyway be handled more efficiently with dictation software) and, subjectively, there was little impact on the time needed to produce the content.
The publishing process however proved a bit more complex. It remains to be seen whether the benefits justify this.
4 Further reading
Knauff M, Nejasmic J (2014) An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development. PLoS ONE 9(12): e115069. doi:10.1371/journal.pone.0115069
LaTeX versus WYSIWYG: Usability, security and enjoyability
5 Command to handle references
\newcommand{\sourcenamestyle}[1]{ {#1} }
\newcommand{\sourcenamerefstyle}[1]{ [{#1}] }
% The original definition of \source was truncated; a minimal completion:
\newcommand{\source}[1]{ \paragraph{} \sourcenamestyle{#1} }
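A minimal sketch of how these commands might be invoked in a post's source, assuming \source typesets its argument as a fresh paragraph; the file name references.tex and the citation text are hypothetical:

```latex
\documentclass{article}
% pull in the reference-handling commands from their known location
% (references.tex is a hypothetical name for that file)
\input{references.tex}
\begin{document}
One study \sourcenamerefstyle{Knauff 2014} compared document
preparation systems.
\source{Knauff M, Nejasmic J (2014) An Efficiency Comparison of
Document Preparation Systems. PLoS ONE 9(12)}
\end{document}
```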
6 Shell script to create PDF file
# Create an appropriately named log folder for this document
export LOG_FOLDER=<Path to log folder>/$1
mkdir -p "$LOG_FOLDER"
# Create a pdf file in the log folder, keeping the auxiliary files out of the way
pdflatex -output-directory="$LOG_FOLDER" "$1.tex"
# copy the pdf to the current working directory
cp "$LOG_FOLDER/$1.pdf" .
# open the pdf in Preview
open -a Preview.app "$1.pdf"

Saturday, 4 June 2016

Capitalism, Slavery, Modern Management and Software Development

Cotton Planter and his pickers 1905
Capitalism is in denial over plantation slavery and its role in present practice. These influences are perhaps most clearly seen in the software industry.

Slavery was a capitalist enterprise that gave birth to a number of modern management practices which arose as slave owners tried to maximise return from human capital using devices such as teams and infantilisation of slaves. It was an early example of neoliberalism with the market taking precedence over morality and religion.

Slavery's management practices persist in many areas of business, perhaps most prominently in the digital technology industry, especially software development. Teams have replaced field gangs, and management seeks to subvert nominally empowering methodologies like Agile to turn programmers from artisans into work-gang members by separating conception from execution, infantilising developers and turning programming into an assembly line, thus rendering software development joyless to those who recall the days before the dotcom bubble.
These practices survive because management has technical, elite and political motivations. Management is (theoretically) a response to increasing organisational complexity, provides benefits for a managerial elite and, politically, provides a way to enforce discipline on workers. Management as a role appeals to the love of power in us all (political); as a career it gives entry to a perceived elite (elite); and it generates, then controls, organisational complexity (technical).

The methods and practices of slavery have survived, but they have adopted subtler forms and become cultural norms that, like a chameleon, are invisible until seen.

Examples are given, from today's software industry, of practices originating in slavery. This is not to criticise the software industry in particular: it is just the industry with which the author is most familiar.

Modern software development methodologies such as Agile provide a way to manage complex development tasks, nominally empowering a development team while infantilising its members. Teams are the analogue of the plantation gangs, with architects perhaps taking the role of artisans and managers the role of overseers (foremen in factories).

Slavery was Capitalism and slaves were capital.

This poster shows slaves were capital assets

Slavery is the skeleton in the cupboard of capitalism and modern business culture, but like a domineering patriarch its influence lingers well past its official death. Business is keen to disavow slavery despite inheriting many of its methods. The whip has been replaced by subtler controls such as peer pressure and the real-programmer syndrome, while slavery's practices and their underlying ideologies have, like oxygen, become invisible.

One way to exclude Slavery from Capitalism is to argue that Wage Labour is a necessary defining characteristic of Capitalism, that plantations did not use paid labour and thus could not have been capitalist enterprises.

If Wage Labour is a defining feature of Capitalism then any one-person business is not Capitalist, since it pays nobody wages. Similarly a farmer or landowner who rents out property (i.e. capital) would not be a Capitalist.

Slaves were capital, and Plantation Management tried to extract maximal returns from human capital. Plantations were therefore capitalist. In the early days of slavery, when slaves could be worked to death and replaced fairly cheaply, they were treated more as human resources than as human capital.

Plantation Management was modern management.

The model of a modern manager.

Cooke [1] describes three tests for an activity to be considered modern management:

  1. It must be carried out in a capitalist system
  2. It must involve activities that surpass a certain (undefined) level of sophistication
  3. There must be a distinct group of people, described as managers, who carry out these activities

Since slaves were capital and were used to achieve maximal return on investment, Slavery was a capitalist system.

Management of a plantation was a sophisticated affair involving a complex division of labour and techniques to control unwilling labour, namely slaves.

The manager role was carried out by overseers, to whom the owner delegated control of a plantation. There were few salaried managers but many slave overseers. Overseers constituted a managerial class that controlled, but did not own, the enterprise.

Plantation Management was Modern Management.

Modern management practices originating from Slavery

The capitalist pyramid or chain of command


Slave owners were concerned with profits and used advanced accounting techniques, such as depreciation, more consistently than many contemporary northern factories, often considered the birthplace of modern management. In some ways Slavery allowed a more scientific approach than the factories did, since slaves could not quit and the owner could monitor them more closely than free labour.
Many plantations used a standard accounting system described in Thomas Affleck's Plantation Record and Account Books, which contained advanced techniques, including how to calculate depreciation. By the 1840s planters were depreciating their slaves: appraising their inventory at market value, comparing it with past market value to assess appreciation or depreciation, calculating an allowance for interest, and using this to work out their capital costs. And all by hand. Slaves were assets whose value exceeded that of all other assets. The owners could have said “people are our greatest assets” but did not regard slaves as people.
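The capital-cost arithmetic described above can be sketched as follows, applied here to a generic capital asset. The function name, the figures and the 6% interest rate are illustrative assumptions, not values from the historical record.

```python
def capital_cost(opening_value, closing_value, interest_rate=0.06):
    """Annual cost of holding a capital asset: depreciation over the year
    (negative if the asset appreciated) plus an allowance for the interest
    the tied-up capital could otherwise have earned."""
    depreciation = opening_value - closing_value
    interest_allowance = opening_value * interest_rate
    return depreciation + interest_allowance

# An asset appraised at 1000 at the start of the year and 950 at the end,
# with capital assumed able to earn 6% elsewhere:
cost = capital_cost(1000, 950)  # 50 depreciation + 60 interest = 110
```

The point is not the trivial arithmetic but that planters performed it systematically, by hand, across whole inventories, decades before it became standard factory practice.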
Slaveholders developed a value unit called “the prime field hand”; today's equivalents are the man-day or, for machines, horsepower. They defined the prime hand by criteria such as expected production per day. Workers were measured against this standard and given values such as “half hand” and “quarter hand”, and the standard was understood across plantations. If a slaveholder said he had 13 hands who were the equivalent of 10 prime hands, other slaveholders would have known exactly what that meant.
A plantation was a complex enterprise with a complex division of labour. Thus one visitor noted [1] that a Virginia estate of six plantations had six overseers (managers), a general agent (director) and “staff” employees covering the traditional managerial trinity: financial resources (a book-keeper), literal human resources (two physicians and a preacher) and plant (a head carpenter, a tinner and a ditcher). Today the plant would be an architecture group and IT support. The visitor added: “Every thing moves on systematically, and with the discipline of a regular trained army”. This is management's wet dream.
Managerial techniques included the application of scientific method, selection of the best person for the job, monitoring of performance, sophisticated organizational rules, a chain of command, team spirit, analysis of the maximum number of people an overseer could properly control, attempts to instil discipline and the separation of conception from execution.

The Plantation as a Rational Machine

The inhumanity of plantations, factories and indeed the modern office arises from a view of the enterprise as a machine, or at best a factory, rather than a community. This view may not have originated with slavery but was definitely present at that time. Cooke [1] confirms this with a quotation from Bennet Barrow's Highland plantation rules: “A plantation might be considered as a piece of machinery. To operate successfully all its parts should be uniform and exact, and its impelling force regular and steady.”

This sentiment is echoed in modern software development: all members in a team, especially an agile team, should be interchangeable and velocity must be maintained and increased, generally at the expense of creativity and innovation and, in a depressingly large number of companies, with no concern for the workers.

Industrial Discipline

The plantation-factory-machine required military level discipline from its biological components. The pace had, as with software development, to be fast and predictable, and was never enough for the owners.

Management is a bit more subtle these days.

Slaves were obviously not focussed on surprising and delighting the customer and would resist efforts to make them work harder. As in the Stanford Prison Experiment, force would work but would damage profits if kept up indefinitely, so other tricks had to be used. Most of these can be identified in the later practices of Scientific Management, or Taylorism.
One tactic was the Gang system, which split slaves into teams of 10 to 20 people (the modern consensus is that about nine people is the maximum size of an efficient Agile team). Every member had a defined task and depended on the actions of other members, and there could be sub-gangs for different tasks. Work was organised to foster tensions and dependencies between gangs. The gangs also developed a team spirit in which effort and commitment to the team (teamwork) were manipulated for the slave owners' ends.
Thus, in a field gang at planting time, the fastest workers would run ahead and dig holes, the slowest would follow and drop seed into the holes, and an in-between group would come behind and cover the seeds.
The modern software development team may be considered the equivalent of the gang, with the line manager taking the role of overseer and the team largely self-policing. Team members are treated as commodities, with the notion that members should be interchangeable.

Separating Conception from Execution
Agile: don't think, just move faster

The modern enterprise has senior management who, like plantation owners and general agents, are tasked with deciding what needs to be done and with originality and innovation; line managers tasked with delivery; lead developers tasked with planning and execution; and front line developers who, like field hands, are tasked with the work itself.
Separating conception from execution is part of Taylorist Scientific Management. Elites have long realised that letting underlings think is dangerous (one of the driving forces behind the decades-long ruination of Education in the UK and the increased focus on Training), so underlings have to be trained not to think; indeed they must be infantilised. The baby language of Agile user stories serves this function admirably.
Separation of conception from execution with managers doing all the thinking, in increasing detail as orders proceed down the chain of command, requires the worker not to think. At the developer level all that is needed is the ability to code to order and write tests. The manager and leads run ahead determining everything, the junior members fill in the code to order and the testing team run behind ensuring all is well. Then the code is loaded to the production truck and delivered to the customer.


Slaves waiting for sale: any resemblance to a Job centre is purely coincidental

The modern manager who insists workers must be interchangeable (and therefore faceless)
Modern Capitalism started largely with Slavery and gave birth to management practices that persist today. The whip has been replaced with the PIP (Performance Improvement Plan) and teams have replaced the gangs. Teamwork and team players are the modern mantra. Software development methodologies such as Agile, which are intended to empower developers, have been used to reduce teams to field gangs, with team spirit and peer pressure making the job of the overseer/foreman, now known as the manager, easier, and exploiting team members' care for each other to business ends.

Reducing skilled workers to digital field hands, a long-time goal of management, seems to underlie much of the dysfunction of modern business. Like slavery, these practices are very profitable, except to the workers, but come at a considerable human and social cost.

Research is needed to ensure that business can be profitable at minimal human cost, and to ensure management and business tools do not separate us from our humanity, as happened in the Holocaust when management techniques allowed camp commandants and others to forget they were dealing with humans.

Of course this may become redundant if AI eliminates the need for work as well as for workers to earn the money to pay for the goods the AI produces. 



  1. Bill Cooke, “The Denial of Slavery in Management Studies”, IDPM Discussion Paper Series, Paper No. 68, University of Manchester, July 2002, available from http://idpm.man.ac.uk
  2. “Plantations Practiced Modern Management”, https://hbr.org/2013/09/plantations-practiced-modern-management . Note that this research focussed on American slavery and factories; there is good research to be done looking at the influence of British practices.
  3. Thaddeus Russell, A Renegade History of the United States, Simon & Schuster, 2011, ISBN 9781416576136
  4. “The Eerie Similarities of Slavery Management Practices to Modern Business”, http://www.topmanagementdegrees.com/slave-management/
  5. The Stanford Prison Experiment, Philip Zimbardo, http://www.prisonexp.org/
  6. Eric Williams, Capitalism and Slavery, University of North Carolina Press, 1944
  7. “Slave Owners versus Modern Management: Can You Tell the Difference?”, http://www.topmanagementdegrees.com/slave-management/

Wednesday, 18 May 2016

The Software Developer Shortage Myth: a confidence trick repeated

The current developer shortage myth is merely a repeat of the 1960s myth of a shortage of technologists in the UK, as the brightest minds, not being stupid, realised the best life was to be found outside the UK. There is evidence that the shortage myth is being manipulated to justify outsourcing of software production.

The Brain Drain and the White Heat of technology

In the 1960s the media panicked about the brain drain: British scientists and engineers emigrating to places where the view that scientists should make everything except money did not hold sway. At the same time the great and good (lawyers, accountants, politicians etc.) claimed Britain needed more scientists and engineers and referred to the transformative effects of the “White Heat of Technology”. Persuading young people to enter a career in STEM (Science, Technology, Engineering and Mathematics) was a great way to prevent bright minds outside establishment families from aspiring to the positions of power and influence held by those calling for more people to enter STEM. Whether or not the country needed more STEM graduates, it seemed clear that industry and commerce wanted them the way they wanted the proverbial cranial aperture.

Fast forward 50 years and there is an alleged shortage of software developers. When evidence to the contrary is presented it is dismissed or spun as a shortage of “good” developers - “Good” as in “What Business says it wants”.

In 1948 London Transport imported West Indians to do the jobs British people were allegedly unwilling to do at the wages on offer. This eventually led to race riots in 1958, which were, rightly, suppressed. In the late 20th and early 21st centuries the Internet allowed outsourcing of many types of work to developing countries. Since this did not bring “immigrants” into Britain to “steal jobs” (always an incorrect term, since jobs are offered, not stolen), it made a repeat of the job-related race riots of 1919 and 1958 almost impossible.

Today anecdotal evidence is being spun into a mythical shortage of software developers to justify outsourcing development to cheaper countries.

Even if there is a shortage of developers, the current (2016) drive to increase the number of people who can program will, by virtue of the law of supply and demand, polarise the software industry into a small number of near genius level digital technologists, a large number of commodity developers doing routine work and a few “seniors” and “leads” in between herding those at the bottom.

As in the 1960s, the public are being deceived into trying to enter STEM, today with a focus on digital technology, in order to benefit the “elite” of society.

There is no developer shortage

At the time of writing, layoffs in Silicon Valley, supposedly the place where demand for software engineers is greatest, have doubled. Even allowing that supply there exceeds local demand, an excess of software talent in the supposed hub of innovation is surprising.

Also at the time of writing there has been a fall in the number of permanent technology opportunities in the UK but a corresponding rise in contract opportunities, and, according to Cvtrumpet, competition for vacancies is increasing as the number of employees looking for a new job has reached its highest level since autumn 2013.

This may reflect uncertainty over Britain's membership of the EU plus more generalised anxieties.

Developer salaries have risen neither in real terms nor, it seems, in absolute terms, at least in the UK. London has the highest salaries and the employers least likely to compromise on their requirements, but London salaries reflect the cost of living there and are not qualitatively different from those elsewhere. Ten years ago an entry level developer could expect to command about £30,000 a year; today entry level developers command between £25,000 and £32,000. This is higher than most non-programming jobs, but advancing beyond it can be difficult without job hopping, and requirements are more stringent: an increasing number of entry level jobs require a degree in computer science, which in practice means an upper second or a first.
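A back-of-envelope check makes the point. The sketch below assumes a constant 2.5% average annual inflation rate, which is an illustrative figure rather than an official statistic; the function name is likewise invented.

```python
def inflation_adjusted(salary, annual_rate, years):
    """Nominal salary needed after `years` of compound inflation to match
    the original salary's purchasing power."""
    return salary * (1 + annual_rate) ** years

# What a £30,000 salary from ten years earlier would need to be today
# just to stand still in real terms, at an assumed 2.5% a year:
required = inflation_adjusted(30_000, 0.025, 10)  # roughly £38,400
```

On those assumptions an entry level salary would need to be well above the £25,000 to £32,000 range quoted above merely to have kept pace, supporting the claim that salaries have fallen in real terms.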

The most telling evidence against a developer shortage is that every advertisement for a developer job attracts several hundred responses. This level of response has led to automated CV scanning and rejection by software that simply ticks boxes. This saves the time of people in HR who used to tick boxes manually, but can be overkill: some years ago a company received 25,000 applications for a run of the mill job and its scanning software rejected every one. Companies may have an exaggerated view of what such software can do, but the fact that automation is needed to scan applicants strongly suggests an abundance of candidates. Against that is the increasing ease of making an application, though this trend is not confined to programming jobs.
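The box-ticking screening described above can be caricatured in a few lines. The required keyword list and the sample CVs are invented for illustration; real applicant-tracking systems are more elaborate, but the failure mode is the same.

```python
# Reject any application that does not mention every required keyword.
REQUIRED = {"python", "agile", "degree"}

def passes_screen(cv_text):
    """True only if every required keyword appears verbatim in the CV."""
    words = set(cv_text.lower().split())
    return REQUIRED <= words

applications = [
    "Python developer, agile teams, CS degree",
    "Veteran C programmer, shipped compilers",  # rejected: no boxes ticked
]
shortlist = [cv for cv in applications if passes_screen(cv)]
```

Note how the second, arguably stronger, candidate is discarded unread: rigid keyword matching measures conformance to the advertisement, not ability.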

But could it be that there is a shortage of “Good” developers?

There is no shortage of “Good” developers: Good is whatever business wants it to mean

A standard response to the question “is there a shortage of developers” is “NO, there is a shortage of good developers”.

The problem here is defining a “Good” developer.

At least one employer has specified candidates must have graduated from one of the top ten universities in the world. This is obviously someone's desire to boost their ego and must be mentioned then ignored.

There are various definitions of “good”, but they all seem to relate to what business says it wants. Setting aside experience and technical skills, this boils down to a love of problem solving, an analytic mindset, a passion for learning, a desire to improve and the ability to work in a team.

Solving problems is easier than deciding what problems to solve: that meta-problem is normally reserved for managers. An analytic mindset tends to conflict with creativity, another function often reserved to managers. A passion for learning is often reduced to a “passion for technology”, with technology meaning programming, and a desire to improve is allowed as long as it does not threaten the power structures of the company. The ability to work in a team is usually good, but tends to reduce the best to the average, and managers tend to distrust anyone willing to work across team boundaries.

Once upon a time programmers rejected corporate values and produced remarkable things. Today programmers embrace corporate values and tweak existing products. Innovative products today are produced by those who have never been in a corporate environment or have left it in order to do their own thing. Unfortunately when people like this succeed they usually form a corporation and create a corporate culture just like everywhere else.

Business defines good developers as what it says it needs, a definition that could well exclude many pioneers of software development, and the developer community has embraced this definition. What business NEEDS may be neither what it says or thinks it needs nor what it actually desires (compliant, machine-like employees who do not rock the boat but meet deadlines).

Why does business claim there is a shortage of developers?

The developer shortage myth allows employers to lobby for outsourcing

It takes years to learn digital skills.
The most cynical answer, and therefore the one most likely to be correct, is that employers want an excuse to outsource work to developing countries and thereby reduce wage costs.

Fortune Magazine [1] looked at an analysis of the US H1-B visa system and found that only about 30% of the top ten positions requested in H1-B visa applications lacked enough qualified American jobseekers to meet demand. It was also common for employers to write requirements so narrowly that only one temporary overseas worker could fit them. The pattern for this was a Jobserve advertisement in the early 2000s which required a software developer who was a “graduate of an Indian university, fluent in an Indian native language and with a complete understanding of Indian culture”. The Fortune article does imply that employers are abusing the H1-B system.

In the UK in the early 2000s, outsourcing proved less popular than importing workers from developing countries and exploiting them mercilessly. Unlike 1919, this did not provoke race riots. Some contractors on the shout99 website noted they had to spend half their contract training the inexperienced overseas developers who would replace them. One contractor reported workers employed on £35 a day and replaced regularly so as not to violate restrictions on time spent in the UK. At least one other company in mainland Europe (which I will not name) did the same, transferring workers from its overseas branches to a European branch.

In brief the shortage myth was created to lower costs by importing lower paid workers keen to work in the US or by outsourcing development.


Current trends suggest there is no shortage of developers. Layoffs in Silicon Valley are increasing, software salaries are not increasing, employer requirements are becoming more stringent and the response to an advertisement for developers is so great that automated CV scanning is needed.

Long experience suggests the “shortage” is a myth designed to justify cost reduction by outsourcing and that the drive to increase the number of coders is a confidence trick that will ultimately result in shunning of STEM by the next generation.

If there ever was a shortage of developers it no longer exists.

The most likely resolution of the developer “shortage” will be the development of specialised AI that can produce code from a rough spoken or typed specification, perhaps learning from its mistakes. At that point the entire software industry will be dead, or at least meat free.