Sunday, 13 March 2016

Athens regained?

Planting season is now in full swing, so it is far from certain I will get to the BCS talk this week on AI. That is a disappointment, as I was working in AI in the 1990s and was an external supervisor for a couple of Ph.D. candidates.

The topic has been more prominent in the press over the last year than it has for some time, and I have been collecting articles and snippets. It is not uncommon to see heightened press coverage of paradigm-changing technology when the economy is in difficulty. It is the sort of jam-tomorrow promise that those dazzled by the delights of growth cling to when recession abounds. But there is real potential here; it isn't just vapourware.

One of the reasons the various techniques of AI were so useful when I was working in the field still pertains: they can quickly identify a good enough answer to an otherwise unsolvable problem. That is often the case in scheduling and timetabling, where there may be no solution that satisfies all constraints. It is certainly the case in aspects of fluid mechanics (which is where I was applying AI), since the fundamental Navier-Stokes equations have no known general analytical solution.
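To show what "good enough" means in practice, here is a toy sketch in Python (my own hypothetical example; the class names and clash list are invented). Four of the six classes clash pairwise but there are only three slots, so no timetable can satisfy every constraint, and a simple hill climb settles for the least-bad assignment:

```python
import random

# Toy over-constrained timetabling (hypothetical data): the first four
# classes all clash with each other, but with only three slots at least
# one clash is unavoidable, so we look for a "good enough" answer.
CLASSES = ["maths", "physics", "french", "history", "art", "music"]
SLOTS = [0, 1, 2]
CLASHES = [("maths", "physics"), ("maths", "french"), ("maths", "history"),
           ("physics", "french"), ("physics", "history"),
           ("french", "history"), ("art", "music")]

def violations(assignment):
    """Count clash constraints broken by a slot assignment."""
    return sum(1 for a, b in CLASHES if assignment[a] == assignment[b])

def hill_climb(steps=2000, seed=42):
    """Start from a random assignment; accept any single change
    that is no worse, so the search drifts towards fewer clashes."""
    rng = random.Random(seed)
    best = {c: rng.choice(SLOTS) for c in CLASSES}
    for _ in range(steps):
        trial = dict(best)
        trial[rng.choice(CLASSES)] = rng.choice(SLOTS)
        if violations(trial) <= violations(best):
            best = trial
    return best

timetable = hill_climb()
print(violations(timetable), timetable)
```

The point is that the search never proves anything: it simply stops at an assignment with as few broken constraints as it could find, which is exactly the kind of answer a human timetabler would accept.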

Of course things have changed over a quarter of a century. Certainly machine learning has advanced significantly. Go is a game with too many possibilities to be exhaustively searched, so the fact that Google DeepMind's AlphaGo recently beat the world champion demonstrates how far the field has come. The machine learnt by playing against itself.
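The scale of the problem is easy to see with a back-of-envelope calculation (the branching factors and game lengths below are rough, commonly quoted figures, not precise values):

```python
import math

# Rough, commonly quoted figures: an average branching factor b and a
# typical game length d give on the order of b**d lines of play.
def tree_size_exponent(b, d):
    """Return x such that b**d is roughly 10**x."""
    return d * math.log10(b)

print(f"chess: ~10^{tree_size_exponent(35, 80):.0f} lines of play")
print(f"go:    ~10^{tree_size_exponent(250, 150):.0f} lines of play")
```

At something like 10^360 lines of play, Go cannot be brute-forced, which is why AlphaGo relies on learned evaluation and self-play rather than exhaustive search.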

I doubt anyone has much in the way of reservations about AI being used to solve such difficult-to-impossible problems. The worries come from a different branch of the discipline: autonomous behaviour and control systems. This is where we have started to see human-like presentation and behaviours from machines. There is even a hotel in Japan effectively run by human-like automata. (I dislike calling them robots, because we have had robotics on factory floors for many years - they are brilliant, but they are not what most people imagine when referring to robots.)

The issue is when we defer executive action to machines; in effect we surrender control. Some of the systems I worked on were presented quite deliberately as decision support systems. There was no executive capability, simply information and advice to human operators - hopefully faster, more reliable and more condensed information, but nonetheless just advisory.

As we approach the point where autonomous units can carry out mundane, repetitive tasks reliably, we have to ask some very serious political and ethical questions. If we ignore the transition phase for now, it is possible to envisage a marvellous future: imagine a world without drudgery, where production is delegated to autonomous entities. Sounds good, but it is very dangerous. How do ordinary people then earn a living? If labour is negligibly cheap (as it would be with efficient, capable AI units), then the only value is in control of resources or in artistic and intellectual endeavours.

Humanity has been here before on occasions; just substitute slaves for the AI entities. In ancient Athens, citizens (not women, slaves or foreigners) fought in the army and were expected to participate in policy formulation. Citizens were paid to attend the assembly, so even the poorest were not excluded. There are other models of slave-based societies, of course, like the Romans. They were so dependent on slaves that when it was proposed that slaves wear a uniform in the city, the measure was defeated by pointing out that the slaves would then realise how numerous they were, and revolt would surely follow.


  1. Of course things have changed over a quarter of a century. Certainly machine learning has advanced significantly.

    ...but it still can't translate simple unambiguous German into grammatical English, much less cope with nuance or idiom.

    1. So that probably puts the machine on a par with my schoolboy German.

      This might be of interest... AI crossword-solving application could make machines better at understanding language

  2. Think we are a long way off Star Trek's Data!

    The main problem with AI is teleological: where does it get purpose from? Human beings create values, but they also have purposes defined by evolutionary drives: survival, propagation of the species. But we have mixed values, which leads to choices being deliberated and made, sometimes counter to evolutionary drives. For instance, contraception and falling birth rates are a counter-evolutionary choice.

    Where do AI systems get their directives from? That is a question which has been explored by science fiction writers for many years, usually with rather bad consequences for us! AI systems are relatively simple compared to biological systems, and do not have the complexity that comes with evolution.

    Currently AI systems are limited to learned choices within constraints. In other words, they can make intelligent decisions but within a limited framework, rather like a diagnostic online health question and answer system. They are not “open ended”.

    And even then there are problems. Google’s driverless car had a prang when it “assumed” a bus would slow and let it out. But human behaviour has an innate cussedness, as my American friends would say, and the bus didn’t stop. The Google car hit it. Did the system learn from the mistake? Perhaps, but would you want to be in a car at 60 mph making a mistake? Or a plane?

  3. Where does it get purpose from? That's the big question. If evolution is right, purpose must be emergent somewhere, and that means machines might be able to derive purpose for themselves. For a creationist it is a different issue - it is only there if the creator put it there. At least until intelligent machines start building machines more capable than themselves, which is on the cusp of happening.