Stephen Smith's Blog

Musings on Machine Learning…


Learning in Brains and Computers


Introduction

In the last couple of articles we considered whether the brain is a computer and then what its operating system looks like. In this article we'll look at how the brain learns and compare that to how learning works in a modern AI system. As we noted before, our DNA doesn't contain a lot of seed data for the brain; nearly everything we know needs to be learned. This tends to be why, as animals become more advanced, their childhoods become longer. Besides growing to our full size, we also need that time to learn what we will require to survive on our own as adults once we leave our parents. Similarly, AI systems start without any knowledge, just a seed of random data, and then we train them to do the job we desire, like driving a car. However, how we train an AI system is quite different from how we train a child, though there are similarities.

How the Brain Learns

For our purposes we are looking at what happens at the neuron level during learning, rather than considering higher level theories of educational methods. As we are trained, two things happen. One is that when something is reinforced, the neural connections involved are strengthened; similarly, if a pathway isn't used, it weakens over time. This is controlled by the amount of chemical neurotransmitter at the synapse, the junction between neurons. The other thing that happens is that neurons grow new connections; to some degree the brain is always re-wiring itself. As mentioned before, thousands of neurons die each day and not all of them are replaced, so as we age we have fewer neurons. But this is counterbalanced by a lifetime of learning in which we have continuously grown new neural connections, so as we age we may have fewer neurons, but far more neural connections. This is partly why staying mentally active and pursuing lifetime learning is so important for maintaining mental health into older age.
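This use-it-or-lose-it dynamic can be sketched in code. The snippet below is a toy illustration, not a biological model: a Hebbian-style rule strengthens the weight between neurons that are active together, while every weight decays slightly each step, so unused connections fade away. All the numbers (learning rate, decay rate) are arbitrary choices for the demonstration.

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.1, decay=0.02):
    """One update: strengthen weights between co-active neurons,
    and let every weight decay slightly (disuse weakens it)."""
    w = w + lr * np.outer(post, pre)   # Hebbian strengthening
    w = w * (1.0 - decay)              # uniform decay over time
    return w

# Two presynaptic and two postsynaptic neurons, weak random wiring.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.1, size=(2, 2))

# Repeatedly co-activate pre neuron 0 with post neuron 0; the others stay silent.
pre, post = np.array([1.0, 0.0]), np.array([1.0, 0.0])
for _ in range(50):
    w = hebbian_step(w, pre, post)

print(w.round(3))
```

After fifty steps the (0, 0) connection has grown much stronger, while the unused connections have decayed toward zero, mirroring the strengthening and weakening described above.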

Interestingly, this is also how memory works. The same neural strength adjustment and connection growth is how we encode memories. The full system is a bit more complex, since we have a short term memory system from which some data is later encoded into long term memory, but the basic mechanisms are the same. This is why we forget things: if we don't access a memory, the neural connections behind it weaken over time and eventually the memory is lost.

A further feature of biological learning is how the feedback loop works. We get information through our senses and can use that for learning, but it's been shown that learning is much more effective if it leads to action, and the action then provides feedback. For instance, being shown a picture of a dog and told it's a dog is far less effective than being given a dog you can interact with by touching and petting it. Having exploratory action attached to learning appears to be far more effective in how we learn, especially at young ages. We can call this the input – learn – action loop with feedback, rather than just the input – learn loop with feedback.

How AIs Learn

Let’s look specifically at Neural Networks, which have a lot of similarities with the brain. In this case we represent all the connections between neurons as weights in a matrix where zero represents no connection and a non-zero weight represents a connection that we can strengthen by making larger or weaken by making smaller.
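As a rough sketch of this representation (an illustration of the idea, not code from any particular library), the snippet below stores the connections between three input neurons and two output neurons as a weight matrix, where a zero entry means no connection:

```python
import numpy as np

# Three input neurons feeding two output neurons.  A zero weight means
# no connection; larger magnitudes mean stronger connections.
weights = np.array([[0.8, 0.0, -0.3],    # output 0: connected to inputs 0 and 2
                    [0.0, 0.5,  0.0]])   # output 1: connected to input 1 only

inputs = np.array([1.0, 2.0, 3.0])

# Each output neuron's activation is the weighted sum of its inputs.
activations = weights @ inputs
print(activations)   # [-0.1  1. ]
```

Strengthening a connection is just making its entry larger; severing one is setting it to zero.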

To train a Neural Network we need a set of data where we know the answers. Suppose we want to train a Neural Network to recognize handwritten numbers. What we need is a large database of images of handwritten numbers, along with the number each image represents. We then train the Neural Network by seeding it with random weights, feeding each image through the network, and comparing its output to the correct answer. Sophisticated algorithms like Stochastic Gradient Descent adjust the weights in the matrix to produce better results. If we do this enough, we can get very good results from our Neural Network. We often apply other adjustments as well, such as setting small weights to zero so they no longer represent a connection, or penalizing large weights, since these lead to overfitting.
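Here is a minimal sketch of that training loop. To keep it self-contained it uses a tiny synthetic dataset in place of real handwritten digit images, and the learning rate, weight penalty, and pruning threshold are all illustrative assumptions, not recommended values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Tiny synthetic stand-in for a labelled image database:
# 4-pixel "images" of two classes, with known correct answers.
X = rng.normal(size=(200, 4))
true_w = np.array([2.0, -1.5, 0.0, 0.5])
y = (X @ true_w > 0).astype(float)

# Seed the network with random weights, as described above.
w = rng.normal(scale=0.1, size=4)
lr, weight_penalty = 0.1, 0.001

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stochastic Gradient Descent: feed examples through, compare to the
# correct answer, and nudge the weights to reduce the error.
for epoch in range(100):
    for i in rng.permutation(len(X)):
        pred = sigmoid(X[i] @ w)
        grad = (pred - y[i]) * X[i] + weight_penalty * w  # penalize large weights
        w -= lr * grad

# Zero out tiny weights so they no longer represent a connection.
w[np.abs(w) < 0.05] = 0.0

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A real digit recognizer would use many more weights arranged in layers, but the loop is the same: predict, compare to the known answer, adjust.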

This may seem like a lot of work, and it is, but it can be done in a few hours or days on a fast modern computer, using GPUs if necessary to speed things up. This relies on the fact that we can adjust weights instantly, since they are just floating point numbers in a matrix, unlike the brain which needs to make structural changes to its neurons.

A Comparison

To effectively train a Neural Network to recognize handwritten decimal digits (0-9) requires a training database of around 100,000 images. One of the reasons AI has become so successful in recent years has been the creation of many such huge databases that can be used for training.

Although it might feel like it to a parent, it doesn’t require showing a toddler 100,000 images for them to learn their basic numbers. What it does take is more time and a certain amount of repetition. Also the effectiveness is increased if the child can handle the digits (like with blocks) or draw the digits with crayons.

It does take longer to train a toddler than an AI, but this is largely because growing neural connections is a slower process than executing an algorithm on a fast computer which doesn’t have any other distractions. But the toddler will quickly become more effective at performing the task than the AI.

Comparing learning to recognize digits like this may not be a fair comparison, since the toddler is first learning to distinguish objects in their visual field and then to recognize objects when they are rotated and seen from different angles. So the input into learning digits for a brain probably isn't a set of pixels straight off the optic nerve; the brain will already have applied a number of previously learned algorithms to present a higher level representation of the digit before being asked to identify it. In the same way, perhaps our AI algorithms for identifying digits in isolation from pixelated images are useful for AI applications, but aren't useful on the road to true intelligence, and perhaps we shouldn't be using these algorithms in such isolation. We won't start approaching strong AI until we get many more of these systems working together. For instance, in a self driving car, the system has to break a scene up into separate objects before trying to identify them, and creating such a system requires several Neural Networks working together.

Is AI Learning Wrong?

It would appear that the learning algorithm used by the toddler is far superior to the one used in the computer. The toddler learns quite quickly from just a few examples, and the quality of the result often beats that of a Neural Network. Algorithms like Stochastic Gradient Descent tend to be very brute force: find new values of the weights that reduce the error, then keep iterating until you get a good enough result. If you don't get a good enough result, fiddle with the model and try again (we now have meta-algorithms to fiddle with the model for us as well). But is this really the right approach? It is certainly effective, but seems to lack elegance. It also doesn't seem to work in as varied circumstances as biological learning does. Is there a more elegant and efficient learning algorithm just waiting to be discovered?

Some argue that a passive AI will never work, that the AI needs a way to manipulate its world in order to add that action feedback loop to the learning process. This could well be the case. After all we are training our AI to recognize a bunch of pixels all out of context and independently. If you add the action feedback then you can handle and manipulate a digit to see it from different angles and orientations. Doing this you get far more benefit from each individual training case rather than just relying on brute force and millions of separate samples.
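One simple way today's AI systems approximate this idea is data augmentation: generating shifted or otherwise perturbed copies of each training image, so a single labelled example yields many training cases. A hypothetical sketch, using small pixel shifts as a crude stand-in for handling an object from different angles:

```python
import numpy as np

def augment(image, shifts=(-1, 0, 1)):
    """Generate shifted copies of one image, so a single labelled
    example yields many training cases."""
    copies = []
    for dy in shifts:
        for dx in shifts:
            copies.append(np.roll(np.roll(image, dy, axis=0), dx, axis=1))
    return copies

# One tiny 5x5 "digit" image: a vertical stroke, vaguely a "1".
digit = np.zeros((5, 5))
digit[1:4, 2] = 1.0

variants = augment(digit)
print(len(variants))   # 9 training cases from a single example
```

Real systems use richer transformations (small rotations, scaling, distortion), but the principle is the same: each manipulation of the input squeezes more learning out of one example.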

Summary

There are a lot of similarities in how the brain learns versus how we train AIs, but there are also fundamental differences. AIs rely much more on brute force and the volume of training data, whereas the brain requires fewer examples but makes much more out of each one. For AI to advance we really need to be building systems of multiple Neural Networks rather than focusing so much on individual applications; we are seeing this start to take shape in applications like self-driving cars. We also need to give AIs a way to manipulate their environment, even if this just means manipulating the images they are given as training data and incorporating that manipulation into the training algorithms, making them more effective and less reliant on sheer data volume. I also think that biological brains are hiding some algorithmic tricks that we still need to learn, and that discovering them will make progress advance in leaps and bounds.


Written by smist08

June 23, 2017 at 6:49 pm

Learning over the Web


Nowadays, when people need to know something, they turn to Google. When people need to learn something new, like a new programming language or application, they turn to the Web for on-line training. But what is the best way to learn things from the Internet? Which tools work well? Which tools end up wasting a lot of time? As a blog writer and Software Architect, I spend some time wondering about the best ways to disseminate information. I've tried a lot of the items discussed in this posting, both as a teacher and as a student. As usual, these represent my own personal biases and opinions.

I think there is a lot to be said for attending a real physical classroom or a conference. Besides the sessions, there is all the networking and sharing of experiences with other attendees; Sage Summit is a great one for learning and networking here in North America. But travel is expensive and time consuming, and attending classes is often too much of a time commitment. So it's nice that if you can't turn to these, there are still some good alternatives.

PowerPoint is Evil

The heading refers to a well-known Wired article: PowerPoint is Evil. Many consider PowerPoint to be the worst thing that has ever happened to education. Here is the Gettysburg Address in PowerPoint form to emphasize what is lost in a PPT presentation. If you are looking to learn a new topic and you Google it, chances are you will find many PPT presentations on the topic, and chances are that if you download and read these, you will learn very little and become frustrated. I know I've annoyed people by sending them PPTs from old conferences in answer to various questions. Many Universities and Colleges make a big deal of how they publish all their class PPTs on the Web. To me these aren't very useful, and they make me wonder at the value of these institutions. Never mind the horror of death by PowerPoint in a meeting or at a conference.

Khan Academy

Khan Academy is a website that is completely free to use and offers course material spanning elementary school, high school and college. There are over 3000 videos and over 300 math exercises that you can use. All the videos are made by the founder, Salman Khan, and are interesting because of their simplicity: basically you get a virtual blackboard that Salman draws on as he explains a topic. Most of the videos are under ten minutes long, and using the site is quite addictive. Although I haven't done it yet, I feel that producing Khan Academy style videos could be a very efficient way of producing quite effective training material. There are other specialized on-line training sites like Code Academy, but I tend to like Khan the best.

Videos

There are some excellent videos of training sessions and lectures on the Internet; InfoQ is always posting quite good lectures. The main problem I have with them is that they are hard to skim through: often to get the benefit you have to watch a full hour-long video. I tend to prefer written material since I can skim through it and process it quite a bit quicker. You also have to watch the quality of the video, as some are quite unwatchable. In a way, videos are great for watching lectures you missed, perhaps at a conference you couldn't attend. On the other hand, I find it really hard to find an uninterrupted hour in the day to watch a complete video lecture.

I've only created a couple of videos, and I found it very time consuming, mostly because in the editing process you spend so much time watching and repeating parts. Perhaps I need more practice, but I find it can take a full day to create a decent 20 minute video. I wonder if, to practice, I should start doing some of my blogs as vlogs?

Webinars

I've attended some really excellent webinars. I especially like them when they are interactive: if there are a small number of attendees, you can ask questions as you go along. If attendees can't ask questions, I find it isn't nearly as engaging; with a large audience, a helper can often pull good questions from the chat window to inject a little interactivity. Generally it's a good technology for providing a near classroom experience when you can't physically meet. With newer telepresence technologies, I expect webinars to get better and better and to fulfill the vision of virtual classrooms.

Often when people give webinars, they record them and then post them for people that missed the live webinar. Then you get into all the issues I mentioned in the videos section. I often find recorded webinars quite boring in comparison to the live event.

E-Books

Often you can find a number of free e-books on any given topic. For that matter you could buy a real physical book or buy an e-book version. For much learning, I still enjoy quietly reading a book, whether a physical book or an e-book on my iPad. There are always promises of more interactive books, but I still like the old fashioned passive variety. Being able to mark up and search e-books is definitely nice. I even wonder if I should one day create a book version of all my blog posts?

I like that companies post all their instruction manuals as PDFs on the web. When I lost the manual for my phone and needed to change the answering machine settings, I could find the manual much more easily on the web than I could find the printed one in my house.

Blogs

I find blogs a good way to disseminate small amounts of information. However, I don't think a blog is a good vehicle for producing a training course. Cumulatively there are a lot of blogs out there, and a lot of good information appears in blogs first before it appears in other media. I know I try to push out information in my blog before other departments have a chance to process and publish it. So generally I find blogs best for information at the very bleeding edge, often in quite a raw form. I'm not sure I would get many viewers if, say, I ran a ten part series on learning SData.

Wikis

Around the office we have a joke that Wikipedia knows everything, and this is pretty much true. If you need quick info on a topic, then Wikipedia or other wikis can be a great source. We do all our technical documentation in Wiki format now, so we can push it from our internal to our external Wiki very easily. We find this is a great way to provide technical documentation without any extra overhead. Many companies have Wikis of this nature. These aren't always the best places to learn from, but they are great for looking things up.

Reference Material

A lot of companies publish all their reference material to the Internet. Often this is in Wiki format, as mentioned above, but there are many other tools for generating it. For instance, for our Java APIs we insist that all source code has complete JavaDoc, and then we generate this JavaDoc and reference it from the Wiki to provide API reference documentation.

Summary

I tend to think the best way to learn is by doing, and by making the mistakes that only doing exposes. You can't learn all the pitfalls of something just by reading or watching. But how do you get started? The web now offers a wonderful variety of resources, mostly free, to get you started and learning.

Written by smist08

March 31, 2012 at 5:13 pm