Stephen Smith's Blog

Musings on Machine Learning…


Intelligence Through Emergent Behaviour – Part 1


Introduction

One of the arguments against Strong AI asks how a computer could ever break out of its programming to be creative, or how you could program a computer to be self-aware. The argument usually runs along these lines: AIs are typically programmed with a lot of linear algebra (matrix operations) to form Neural Networks, or as lots of if statements as in Random Forests. These seem like very predetermined operations, so how could they ever produce anything creative or go beyond what they were initially trained to do?

This article looks at how fairly simply defined systems can produce remarkably complex behaviours that go far beyond what you would imagine. This field of study started with the mathematical analysis of physical systems that begin with very simple behaviour and, as more energy is added, behave in more and more complex ways, until what appears to be pure chaos results. But these studies show there is a lot of structure in that chaos, and that this structure is quite stable.

The arguments used against Strong AI also apply to the human brain, which consists of billions of fairly simple elements, our neurons, each performing a fairly simple operation, yet combined they yield our very complex human behaviour. This also helps explain the spectrum of intelligence as you go up the evolutionary chain from fairly simple animals to mammals to primates to humans.

Taylor Couette Flow

Taylor Couette flow is a classic experiment from fluid mechanics in which a fluid, typically water, is held between two concentric cylinders. Fluid mechanics may seem far from AI, but this is one of my favourite examples of the transition from simple to complex behaviour since it’s what I wrote my master’s thesis on long ago (plus there really is a certain inter-connectedness of all things).

Consider the outer cylinder stationary and the inner cylinder rotating.

At slow speeds the fluid next to the inner cylinder moves at the speed of that cylinder, the fluid next to the outer cylinder is stationary, and the fluid speed varies smoothly (nearly linearly when the gap is narrow) in between, giving nice simple non-turbulent flow. The motion of the fluid in this experiment is governed by the Navier-Stokes equations, which generally can’t be solved exactly, but in this case it can be shown that at slow speeds this is the solution, and that this solution is unique and stable (to solve the equations you have to assume the cylinders are infinitely long to avoid end effects). Stable means that if you perturb the flow it will return to this solution after a period of time (i.e. if you mix it up with a spoon, it will settle down again to this simple flow).
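For the curious, the exact steady solution here is well known: the azimuthal velocity takes the form v(r) = A·r + B/r, with A and B fixed by the no-slip conditions at the two walls. Here is a small sketch of it (the names R_i, R_o and omega are just my illustrative choices, not notation from anything above):

```python
# Exact steady (laminar) azimuthal velocity for circular Couette flow
# with the outer cylinder stationary and the inner rotating at omega:
#   v(r) = A*r + B/r, chosen so that v(R_i) = omega*R_i and v(R_o) = 0.

def couette_velocity(r, R_i, R_o, omega):
    A = -omega * R_i**2 / (R_o**2 - R_i**2)
    B = omega * R_i**2 * R_o**2 / (R_o**2 - R_i**2)
    return A * r + B / r

# Narrow gap: the profile is very close to a straight line between walls.
R_i, R_o, omega = 1.0, 1.1, 2.0
print(couette_velocity(R_i, R_i, R_o, omega))  # inner wall speed, omega*R_i
print(couette_velocity(R_o, R_i, R_o, omega))  # outer wall speed, zero
```

When the gap R_o − R_i is small compared to the radii, the 1/r term is nearly constant across the gap, which is why the profile looks like the straight line described above.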

As you speed up the inner cylinder, at some point centrifugal force becomes sufficient to push fluid outward from the inner cylinder, with other fluid flowing inward to fill the gap. What you observe are called Taylor cells, where the fluid forms stacked rings of circulating flow around the inner cylinder.

Again the Navier-Stokes equations are solvable, and we can show that there are now two new stable solutions (the second being rotation of the cells in the opposite direction) and that the original linear solution, although it still exists, is no longer stable. We call this a bifurcation: as we vary a parameter, new solutions to the differential equations appear.

As we increase the speed of the inner cylinder, we get further bifurcations, where more numerous, smaller, faster-spinning Taylor cells appear and the previous solutions become unstable. But past a certain point the structure changes again and new phenomena appear, for instance waves.

And as we keep progressing we get more and more complicated patterns appearing.

But an interesting property is that the overall macro-structure of these flows is stable: if we stir the fluid with a spoon, after it settles down it will appear the same at the macro level. This indicates the behaviour isn’t totally random chaos, but that there is a lot of high-level structure to this very complicated fluid flow. It can be shown that these stable macro-structures often have a fractal structure, in which case we call them strange attractors.
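The most famous strange attractor comes from the Lorenz system, a drastic simplification of a convection problem, and it makes a nice stand-in sketch here (again, this is not the Taylor Couette system itself). Nearby trajectories separate rapidly, which is the chaos, yet both remain on the same bounded butterfly-shaped set, which is the stable macro-structure:

```python
# The Lorenz system: three coupled ODEs with a famous strange attractor.
# A simple Euler integration step (dt kept small for numerical stability).
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories starting a millionth apart.
a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)
max_sep = 0.0
for _ in range(50000):                 # integrate for 50 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_sep = max(max_sep, abs(a[0] - b[0]))

print(max_sep)   # grows far beyond the initial 1e-6: sensitive dependence
print(a, b)      # yet both states stay bounded, wandering the attractor
```

The tiny initial difference gets amplified exponentially, so the two runs end up completely uncorrelated, but stirring the system (perturbing a trajectory) never knocks it off the attractor: the macro-structure persists.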

This behaviour is very common in differential equations and dynamical systems where you vary a parameter (in our case the speed of the inner cylinder).

If you are interested, YouTube has a number of good videos of Taylor Couette flow showing these regimes.

What Does This Have to Do with Intelligence?

OK, this is all very interesting, but what does it have to do with intelligence? The point is that the Taylor Couette experiment is a very simple physical system that can produce amazing complexity. Brains consist of billions of very simple neurons and computers consist of billions of very simple transistor logic gates. If a simple system like Taylor Couette flow can produce such complexity then what is the potential for complexity beyond our understanding in something as complicated as the brain or computers?

In the next article we’ll look at how we see this same emergence of complexity from simplicity in computer programs, to start to see how this can lead to intelligent behaviour.



Written by smist08

May 22, 2017 at 6:03 pm

The Road to Strong AI


Introduction

There have been a great many strides in the field of Artificial Intelligence (AI) lately, with self-driving cars becoming a reality, computers routinely beating human masters at Chess and Go, and computers accurately recognizing speech and even providing real-time translation between languages. We have digital assistants like Alexa and Siri.

This article expands on some ideas from my previous article, “Some Thoughts on Artificial Intelligence”. That article is a little over a year old, and I wanted to expand on its ideas, mostly because I’ve been reading quite a few articles and books recently claiming that all this is impossible and that true machine intelligence will never happen. There are always a large number of people who argue that anything that hasn’t happened yet is impossible; after all, a large number of people still believed human flight was impossible after the Wright brothers actually flew, and for that matter it’s amazing how many people still believe the world is flat. Covering all of this in one article is too much, so I’ll start with an overview this time and then expand on some of the topics in future articles.

The Quest for Strong AI

Strong AI, or Artificial General Intelligence, usually refers to the goal of producing a true intelligence with consciousness, self-awareness and all the other cognitive functions that a human possesses. This is the form of AI you typically see in Science Fiction movies. Weak AI refers to solving narrow tasks and appearing intelligent at doing them. Weak AI is what you typically see in computers playing Go or Chess, self-driving cars or machine pattern recognition. For practical purposes, weak AI research is solving all sorts of common problems, and there are a great many algorithms that contribute to making this work well.

At this point Strong AI tends to be more a topic for research, but at the same time many companies are working hard on it, often, we suspect, in highly secretive labs.

Where is AI Today?

A lot of AI researchers and practitioners today consider themselves to be working on modules that will later be connected to build a much larger whole. Perhaps a good model for this is the current self-driving car, where people work on all sorts of individual components, like vision recognition, radar interpretation, deciding what to do next, and interpreting feedback from the last action. All of these modules are then connected up to form the whole. A self-driving car makes a good model of what can be accomplished this way, but note that I don’t think anyone would say a self-driving car has any sort of self-awareness or consciousness, even to the level of, say, a cat or dog.

Researchers in strong AI today are building individual components, for instance good visual pattern recognition that uses algorithms very similar to how neurologists have determined the visual cortex in the brain works. Then they are putting these components together on a “bus” and getting them to work together. They are developing more and more modules, but they are still really working in the weak AI world and haven’t figured out quite how to make the jump to strong AI.

The Case Against Strong AI

There have been quite a few books recently about why strong AI is impossible, usually arguing that the brain isn’t a computer, that it is something else. Let’s have a look at some of these arguments.

This argument takes a few different forms. One compares the brain to a typical von Neumann architecture computer, and I think it’s clear to everyone that this isn’t the architecture of the brain. But the von Neumann architecture was just a convenient way for us poor humans to build computers in a fairly structured way that weren’t too hard to program. Brains are clearly highly parallel and distributed. However, by the Church-Turing thesis, all sufficiently powerful (Turing-complete) computers are equivalent in what they can compute, so a von Neumann computer could in principle be programmed for intelligence (if the brain is some sort of computer). But like all theoretical results, this says nothing about performance or practicality.

I recently read “Beyond Zero and One” by Andrew Smart, which argues that machines can never hallucinate or take LSD and hence must somehow be fundamentally different from the brain. The book doesn’t say what the brain is if it isn’t a computer, just that it can’t be one.

I don’t buy this argument. I tend to believe that machine intelligence doesn’t need to fail in the same ways human brains fail when damaged, but at the same time we learn an awful lot about the brain by studying it when it malfunctions. It may turn out that hallucinations are a major driver of creativity, and that once we achieve a higher level of AI, AIs will in fact hallucinate, have dreams and exhibit the same creativity as humans. One theory is that LSD removes the filters through which we perceive the world and opens us up to greater possibilities; if that is the case, removing or changing filters is probably easier for AIs than for biological brains.

Another common argument is that the brain is more than a current digital computer, that it is in fact a quantum computer of far greater complexity than we currently imagine: that it isn’t chemical reactions that drive intelligence but quantum effects, and that every neuron is really a quantum computer in its own right. I don’t buy this argument at all, since the scale and speed of the brain match those of the ordinary chemical reactions we understand in biology, and the brain’s components are much larger than the electronic circuits where we start to see quantum phenomena.

A very good book on modern Physics is “The Big Picture” by Sean Carroll. This book shreds a lot of the weird quantum brain model theories and also shows how a lot of the other more flaky theories (usually involving souls and such) are impossible under modern Physics.

The book is interesting in that it explains very well the areas we don’t understand, but it also shows how much of what happens on our scale (the Earth, Solar System, etc.) is understood to very high accuracy. For instance, if there were an unknown force that interacts with the brain, we should have seen its force-carrier particle when colliding antiprotons with protons or positrons with electrons. Since we haven’t seen any such particle up to very high energies, anything unknown would have to operate at energies far beyond those of brain chemistry, on the order of a nuclear explosion.

Consciousness and Intelligence in Animals

I recently read “Are We Smart Enough To Know How Smart Animals Are?” by Frans de Waal. This was an excellent book highlighting how we (humans) often use our own prejudices and sense of self-importance to denigrate or deny the ability of the “lesser” animals. The book contains many examples of intelligent behaviour in animals including acts of reasoning, memory, communication and emotion.

I think the modern study of animal intelligence is showing that intelligence and self-awareness aren’t just on/off attributes; there are levels and degrees. I think this bodes very well for machine intelligence, since it shows that many facets of intelligence can be achieved at a complexity far less than that of a human brain.

Summary

I don’t recommend the book “Beyond Zero and One”, but I strongly recommend “Are We Smart Enough to Know How Smart Animals Are?” and “The Big Picture”. I don’t think intelligence will turn out to be unique to humans, and just as we are recognizing more and more intelligence in animals, so we will start to see more and more intelligence emerging in computers. In future articles we will look at how the brain is a computer and how we are starting to copy its operations in electronic computers.


Written by smist08

May 16, 2017 at 7:49 pm

Posted in Artificial Intelligence
