Stephen Smith's Blog

Musings on Machine Learning…

Intelligence Through Emergent Behaviour – Part 1


Introduction

One of the arguments against Strong AI relates to how computers could somehow break out of their programming to be creative, or how you could ever program a computer to be self-aware. The argument usually runs along the lines that AIs are typically programmed with a lot of linear algebra (matrix operations) to form Neural Networks, or as lots of if statements as in Random Forests. These seem like very predetermined operations, so how could they ever produce anything creative or beyond what the system was initially trained to do?

This article looks at how fairly simply defined systems can produce remarkably complex behaviours that go way beyond what you would imagine. This line of study started with the mathematical analysis of physical systems that begin with very simple behaviour and, as more energy is added, become more and more complex, eventually appearing purely chaotic. But these studies show there is a lot of structure in that chaos, and that this structure is quite stable.

The arguments used against Strong AI also apply to the human brain, which consists of billions of fairly simple elements, namely neurons, each performing a fairly simple operation, yet combined they yield our very complex human behaviour. This can also be used to explain the spectrum of intelligence as you go up the evolutionary chain from fairly simple animals to mammals to primates to humans.

Taylor Couette Flow

Taylor Couette flow is an experiment from fluid mechanics in which fluid is held between two concentric cylinders. Fluid mechanics may seem far away from AI, but this is one of my favourite examples of the transition from simple to complex behaviour, since it’s what I wrote my master’s thesis on long ago (plus there really is a certain inter-connectedness of all things).

Consider the outer cylinder stationary and the inner cylinder rotating.

At slow speeds the fluid close to the inner cylinder will move at the speed of that cylinder and the fluid next to the outer cylinder will be stationary, with the fluid speed varying linearly in between to give nice, simple, non-turbulent flow. The motion of the fluid in this experiment is governed by the Navier-Stokes equations, which generally can’t be solved exactly, but in this case it can be shown that for slow speeds this is the solution and that it is unique and stable (to solve the equations you have to assume the cylinders are infinitely long to avoid end effects). Stable means that if you perturb the flow it will return to this solution after a period of time (i.e. if you mix it up with a spoon, it will settle down again to this simple flow).

As you speed up the inner cylinder, at some point centrifugal force becomes sufficient to push fluid outward from the inner cylinder, with other fluid flowing inward to fill the gap. What is observed are Taylor cells, where the fluid forms what look like cylinders of flow.

Again the Navier-Stokes equations are solvable, and we can show that there are now two new stable solutions (the second being the same cells rotating in the opposite direction), and that the original linear solution, although it still exists, is no longer stable. We call this a bifurcation: as we vary a parameter, new solutions to the differential equations appear.

As we increase the speed of the inner cylinder, we get further bifurcations where more, smaller, faster-spinning Taylor cells appear and the previous solutions become unstable. Past a certain point the structure changes again and new phenomena start to appear, for instance waves.

And as we keep progressing we get more and more complicated patterns appearing.

An interesting property is that the overall macro-structure of these flows is stable: if we stir the fluid with a spoon, after it settles down it will appear the same at the macro level, indicating this isn’t totally random chaotic behaviour but that there is a lot of high-level structure to this very complicated fluid flow. It can be shown that these stable macro-structures often have a fractal structure, in which case we call them strange attractors.

This behaviour is very common in differential equations and dynamical systems where you vary a parameter (in our case the speed of the inner cylinder).
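
To make the bifurcation idea concrete without any fluid mechanics, here is a small illustration (my addition, not tied to the original experiment) using the logistic map, a textbook one-line dynamical system that goes from a single stable value to period doubling to chaos as its parameter is increased, much as the flow patterns change as the cylinder speeds up.

```python
# Logistic map x_{n+1} = r * x * (1 - x): watch the long-run behaviour
# change as the parameter r is varied, analogous to varying cylinder speed.

def long_run_values(r, x0=0.5, warmup=500, keep=8):
    """Iterate the map, discard the transient, return the values it settles onto."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.append(round(x, 4))
    return sorted(set(seen))

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, long_run_values(r))
# r=2.8: one stable value; r=3.2: two; r=3.5: four; r=3.9: a chaotic spread
```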

If you are interested in some YouTube videos of Taylor Couette flow, have a look here or here.

What Does This Have to Do with Intelligence?

OK, this is all very interesting, but what does it have to do with intelligence? The point is that the Taylor Couette experiment is a very simple physical system that can produce amazing complexity. Brains consist of billions of very simple neurons and computers consist of billions of very simple transistor logic gates. If a simple system like Taylor Couette flow can produce such complexity then what is the potential for complexity beyond our understanding in something as complicated as the brain or computers?

In the next article we’ll look at how this same emergence of complexity out of simplicity appears in computer programs, and start to see how it can lead to intelligent behaviour.


Written by smist08

May 22, 2017 at 6:03 pm

The Road to Strong AI


Introduction

There have been great strides in the field of Artificial Intelligence (AI) lately, with self-driving cars becoming a reality, computers routinely beating human masters at Chess and Go, and computers accurately recognizing speech and even providing real-time translation between languages. We also have digital assistants like Alexa and Siri.

This article expands on some ideas from my previous article, “Some Thoughts on Artificial Intelligence”. That article is a little over a year old and I wanted to expand on some of its ideas, mostly because I’ve been reading quite a few articles and books recently that claim this is all impossible and that true machine intelligence will never happen. There is always a large number of people who argue that anything that hasn’t happened yet is impossible; after all, a large number of people still believed human flight was impossible after the Wright brothers actually flew, and for that matter it’s amazing how many people still believe the world is flat. Covering it all in one article is too much, so I’ll start with an overview this time and then expand on some of the topics in future articles.

The Quest for Strong AI

Strong AI, or Artificial General Intelligence, usually refers to the goal of producing a true intelligence with consciousness, self-awareness and any other cognitive functions that a human possesses. This is the form of AI you typically see in science fiction movies. Weak AI refers to solving narrow tasks and appearing intelligent at doing them. Weak AI is what you typically see with computers playing Go or Chess, self-driving cars or machine pattern recognition. For practical purposes, weak AI research is proving able to solve all sorts of common problems, and there are a great many algorithms that contribute to making it work well.

At this point Strong AI tends to be more a topic for research, though many companies are working hard on it, often, we suspect, in highly secretive labs.

Where is AI Today?

A lot of AI researchers and practitioners today see themselves as working on modules that will later be connected to build a much larger whole. Perhaps a good model for this is the current self-driving car, where people are working on all sorts of individual components: vision recognition, radar interpretation, choosing what to do next, interpreting feedback from the last action. All of these modules are then connected up to form the whole. A self-driving car is a good model of what can be accomplished this way, but note that I don’t think anyone would say a self-driving car has any sort of self-awareness or consciousness, even to the level of, say, a cat or dog.

Researchers in strong AI today are building individual components, for instance visual pattern recognition that uses algorithms very similar to how neurologists have determined the brain’s visual cortex works. They then put these components together on a “bus” and get them to work together. At this point they are developing more and more modules, but they are still really working in the weak AI world and haven’t quite figured out how to make the jump to strong AI.

The Case Against Strong AI

There have been quite a few books recently about why strong AI is impossible, usually arguing that the brain isn’t a computer, that it is something else. Let’s have a look at some of these arguments.

The “brain isn’t a computer” argument takes a few different forms. One compares the brain to a typical von Neumann architecture computer, and I think it’s clear to everyone that this isn’t the architecture of the brain. But the von Neumann architecture was just a convenient way for us poor humans to build computers in a fairly structured way that weren’t too hard to program; brains are clearly highly parallel and distributed. However, Turing equivalence says that any Turing-complete computer can simulate any other, so a von Neumann computer could in principle be programmed for intelligence (if the brain is some sort of computer). But like many theoretical results, this says nothing about performance or practicality.

I recently read “Beyond Zero and One” by Andrew Smart, which seems to imply that machines can never hallucinate or take LSD and hence must somehow be fundamentally different from the brain. The book doesn’t say what the brain is if it isn’t a computer, just that it can’t be a computer.

I don’t buy this argument. I tend to believe that machine intelligence doesn’t need to fail the same way human brains fail when damaged, but at the same time we learn an awful lot about the brain by studying it when it malfunctions. It may turn out that hallucinations are a major driver of creativity, and that once we achieve a higher level of AI, AIs will in fact hallucinate, have dreams and exhibit the same creativity as humans. One theory is that LSD removes the filters through which we perceive the world and opens us up to greater possibilities; if that is the case, removing or changing filters is probably easier for AIs than for biological brains.

Another common argument is that the brain is more than a current digital computer, and is in fact a quantum computer of far greater complexity than we currently imagine; that it isn’t chemical reactions that drive intelligence but quantum effects, and that every neuron is really a quantum computer in its own right. I don’t buy this argument at all, since the scale and speed of the brain match those of the ordinary chemical reactions we understand in biology, and the scale of the brain is much larger than the electronic circuits where we start to see quantum phenomena.

A very good book on modern Physics is “The Big Picture” by Sean Carroll. This book shreds a lot of the weird quantum brain model theories and also shows how a lot of the other more flaky theories (usually involving souls and such) are impossible under modern Physics.

The book is interesting in that it explains very well the areas we don’t understand, but it also shows how much of what happens on our scale (the Earth, the Solar System, etc.) is provably understood to very high accuracy. For instance, if there were an unknown force that interacted with the brain, we would expect to see its force-carrier particle when we collide antiprotons with protons or positrons with electrons. Since we haven’t seen any such particle up to very high energies, anything unknown would have to operate at energies comparable to a nuclear explosion.

Consciousness and Intelligence in Animals

I recently read “Are We Smart Enough To Know How Smart Animals Are?” by Frans de Waal. This was an excellent book highlighting how we (humans) often use our own prejudices and sense of self-importance to denigrate or deny the abilities of the “lesser” animals. The book contains many examples of intelligent behaviour in animals, including acts of reasoning, memory, communication and emotion.

I think the modern study of animal intelligence is showing that intelligence and self-awareness aren’t just on/off attributes; there are levels and degrees. This bodes very well for machine intelligence, since it shows that many facets of intelligence can be achieved at a complexity far below that of a human brain.

Summary

I don’t recommend “Beyond Zero and One”, but I strongly recommend “Are We Smart Enough to Know How Smart Animals Are?” and “The Big Picture”. I don’t think intelligence will turn out to be unique to humans; just as we are recognizing more and more intelligence in animals, we will start to see more and more intelligence emerging in computers. In future articles we will look at how the brain is a computer and how we are starting to copy its operations in electronic computers.


Written by smist08

May 16, 2017 at 7:49 pm

Posted in Artificial Intelligence


Playing the Kaggle Two Sigma Challenge – Part 5


Introduction

This post concludes my coverage of Kaggle’s Two Sigma Financial Modeling Challenge. I introduced the challenge here and then blogged about what I did in December, January and February. I had planned to write this article after the challenge was fully finished, but Kaggle is still reviewing the final entries and I’m not sure how long that is going to take. I’m not sure whether the delay is being caused by Google purchasing Kaggle or is just the nature of judging a competition where code is entered rather than data being uploaded. I’ll go by what is known at this point, and if there are any further surprises I’ll post an update.

Public vs Private Leaderboards

During the competition you are given a dataset which you can download or run against in the Kaggle cloud. You could train on all of this data offline, or if you did a test run in the cloud it would train on half the data and then test on the other half. When you submitted an entry it would train on all of this dataset and then test against a hidden dataset that we competitors never saw.

This hidden dataset was divided into two parts: ⅓ of the data was used to calculate our public score, which was revealed to us and provided our placing on the leaderboard during the competition. The other ⅔ was reserved for the final evaluation at the end of the competition, called the private leaderboard score. The private leaderboard score was kept secret from us, so we could only make decisions based on the public leaderboard scores.

My Final Submissions

Your last step in the competition is to choose two submissions as your official entries. These are then judged on their private leaderboard scores (which you don’t know). If you don’t choose two entries, Kaggle selects your two highest-scoring entries. I just chose my two highest entries (based on the public leaderboard).

The Results Revealed

After the competition closed on March 1, the private leaderboard was made public. This was quite a shock, since it didn’t resemble the public leaderboard at all. Kaggle hasn’t said much about the internal mechanisms of the competition, so some of the following is speculation, either on my part or from the public forums. It appears that the ⅓–⅔ split of the data wasn’t a random sample but a time-based split, and that market conditions were quite different in the later two thirds than in the first third. This led to quite different scores. I dropped from 71st place to 1086th place (out of 2071).

What Went Wrong

At first I didn’t know what had gone wrong; I had lots of theories but no way to test them. A few days later Kaggle revealed the private leaderboard scores for all our submissions, so I could get a better idea of what worked and what didn’t. The thing that really killed me was using Orthogonal Matching Pursuit in my ensemble of algorithms: any submission that included it had a much worse private leaderboard score than public leaderboard score. Conversely, any submission that used Ridge regression did better on the private leaderboard than on the public leaderboard.

Since I chose my two best entries, they were based on the same algorithm and got the same bad private leaderboard score. I should have chosen something quite different as my second entry and with luck would have gotten a much better overall score.

There is a tendency to blame overfitting when solutions do well on the public leaderboard but badly on the private one. But with only two submissions a day, and given the size of the data, I don’t think this was the case. I think it was more a matter of not having a representative test set, especially given how big a shake-up there was.

What Worked

Before I knew the private leaderboard scores for all my submissions, I was worried the problem was caused by either offline training or my use of reinforcement learning, but these turned out to be fine. So here is a list of what worked for me:

  • Training offline was fine. It provided good results and fit the current Kaggle competition.
  • RANSAC did provide better results.
  • My reinforcement learning results gave equivalent improvements on both the public and private leaderboards.
  • Lasso and ElasticNet worked about the same for both leaderboards.
  • ExtraTreesRegressor worked about the same for both leaderboards.
  • Using current and old time series data worked about the same for both leaderboards.

My best private leaderboard submission was an ExtraTreesRegressor with added columns holding the last timestamp’s data for a few select columns. Several of my ensembles also scored well, as long as they didn’t include Orthogonal Matching Pursuit.

How the Winners Won

Several people who scored in the top ten revealed their winning strategies. A key idea from those with stock market experience was to partition the data based on an estimate of market volatility and then use a different model for each volatility range: for instance, one model when things are calm (small deltas) and a different model when they’re volatile. This seemed to be a common theme. One entrant who did quite well divided the data into three volatility ranges and used Ridge regression on each. Another added the volatility as a calculated column and then used offline gradient boosting to generate the model. The very top people have so far kept their solutions secret, probably waiting for the next stock market competition to come along.
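
Here is a rough sketch of how that volatility-partitioning idea might look in scikit-learn. This is my reconstruction from the forum descriptions, not any winner’s actual code; the volatility proxy, the three-way split and the column names are all assumptions.

```python
import pandas as pd
from sklearn.linear_model import Ridge

def fit_per_volatility(df, feature_cols, target_col="y", time_col="timestamp"):
    # Crude volatility proxy: the cross-sectional spread of the target per timestamp.
    # (At prediction time you would have to estimate volatility from the features
    # instead, since the target isn't known then.)
    vol = df.groupby(time_col)[target_col].transform("std")
    buckets = pd.qcut(vol, q=3, labels=["calm", "normal", "volatile"])
    models = {}
    for name in ["calm", "normal", "volatile"]:
        part = df[buckets == name]
        model = Ridge(alpha=1.0)
        model.fit(part[feature_cols].fillna(0), part[target_col])
        models[name] = model
    return models
```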

Advice for New Competitors

Here is some advice I have based on my current experience:

  • Leverage the expertise on the public forum. Great code and ideas get posted here. Perhaps even wait a bit before starting, so you can get a jump start.
  • Research a bit of domain knowledge in the topic. Usually a bit of domain knowledge goes a long way.
  • Deal with outliers, either by removing or clipping them or using an algorithm like RANSAC to reduce their effect.
  • Don’t spend a lot of time fine tuning tunable parameters. The results will be different on the private leaderboard anyway so you don’t really know if this is helping or not.
  • Know a range of algorithms. Usually an ensemble wins. Know gradient boosting, it seems to always be a contender.
  • Don’t get too wrapped up in the competition, there is a lot of luck involved. Remember this is only one specific dataset (usually) and there is a large amount of data that you will be judged against that you don’t get to test against.
  • Select your two entries to win based on completely different algorithms so you have a better chance with the private leaderboard.
  • Only enter a challenge if you are actually interested in the topic. Don’t just enter for the sake of entering.

Some Suggestions for Improvement

Based on my experience with this challenge, here are some of my frustrations that I think could be eliminated by some changes to the competition.

  • Don’t make all the data anonymous. Include a one line description of each column. To me the competition would have been way better (and fairer) if we knew better what we were dealing with.
  • Given the processing restrictions on the competition, provide a decent dataset which isn’t full of missing values. I think it would have been better if reasonable values were filled in or some other scheme was used; given the size of the dataset and the resource limits, competitors couldn’t do much about them.
  • A more rectangular structure to the data would have helped processing within the limited resources and improved accuracy. For instance, stocks that enter the portfolio still have prices from before that point, and these could have been included. This would have made treating the dataset as a time series easier.
  • Including columns for the market would have been great (like the Dow 30 or S&P 500). Most stock models heavily follow the market, so this would have been a big help.
  • Be more upfront about the rules. For instance, is offline training allowed? Be explicit.
  • Provide information on how the public/private leaderboard data is split. Personally I think it should have been a random sample rather than a time based split.
  • Give the VMs a bit more oomph. Perhaps now that Google owns Kaggle these will get more power in the Google cloud. But keep them free, unlike the YouTube challenge where you get a $300 credit which is used up very quickly.

Summary

This wraps up my coverage of the Kaggle Two Sigma Financial Challenge. It was very interesting and educational participating in the challenge. I was disappointed in the final result, but that is part of the learning curve. I will enter future challenges (assuming Google keeps doing these) and hopefully can apply what I’ve learned along the way.

Written by smist08

March 13, 2017 at 7:36 pm

Posted in Artificial Intelligence


Playing the Kaggle Two Sigma Challenge – Part 4


Introduction

The Kaggle Two Sigma Financial Modeling Challenge ran from December 1, 2016 through March 1, 2017. In previous blog posts I introduced the challenge, covered what I did in December and then what I did in January. In this post I’ll continue with what I did in February, which consisted of refining my earlier work, improving the methods I was using and getting more done during the Kaggle VM runs.

The source code for these articles is located here. The file use2.py is the code I used to train offline; you can see how I comment/uncomment code to try different things. The file multimodelmultitime.py shows how to use these results for 3 regression models and 1 random forest model. The offline file use2.py uses the datafile train.h5, which is obtained from the Kaggle competition; I can’t redistribute it, but you can get it from Kaggle by acknowledging the terms of use.

Training Offline

Usually training was the slowest part of running these solutions. It was quite hard to set up a solution with ensemble averaging when you only had time to train one algorithm. Within the Kaggle community there are a number of people who rely religiously on gradient boosting for their solutions, and gradient boosting has provided the key components of previous winning solutions. Unfortunately, in this competition it was very hard to get gradient boosting to converge within the runtime provided. Some participants took to training gradient boosting offline on their own computers and then inserting the trained model into the source code run in the Kaggle VM. This was quite painful, since the trained model is a binary Python object, so they pickled it to a string and then output the string as an ASCII representation of the hex digits, which they could cut and paste into the Kaggle source code. The problem was that the Kaggle source file is limited to 1 MB in size, which limited the size of the model they could use. However, a number of people got this to work.

I thought about this and realized that for linear regression it was much easier. A linear regression model only requires the coefficient array, whose size is the number of variables, plus the intercept, so generating these and cutting/pasting them into the Kaggle solution is quite easy. I was a bit worried that the final evaluation would use different training data, which would cause this method to fail, but in the end it turned out to be ok. A few people questioned whether this was against the rules of the competition, but no one could quote an exact rule preventing it, just that you might need to provide the code that produced the numbers. Kaggle never gave a definitive answer when asked.
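
For what it’s worth, the mechanics of that trick look something like the sketch below: train a linear model offline, print its coefficients, then paste them into the submitted script as constants and predict with a dot product. The numbers and variable names here are made up for illustration; this isn’t my exact competition code.

```python
# Offline, on your own machine: train, then print the numbers you'll paste in.
from sklearn.linear_model import ElasticNetCV
# model = ElasticNetCV(l1_ratio=0.45).fit(X_train, y_train)
# print(list(model.coef_), model.intercept_)

# Inside the submitted Kaggle script: no training, just the pasted constants.
import numpy as np

COEF = np.array([0.0123, -0.0045, 0.0311])  # pasted from the offline run (made-up values)
INTERCEPT = 0.0002                          # pasted from the offline run (made-up value)

def predict(features):
    # features: 2-D numpy array with the same column order used offline
    return features @ COEF + INTERCEPT
```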

Bigger Ensembles

With all this in mind, I trained my regression models offline. Some algorithms are quite slow, so this opened up quite a few possibilities. I basically ran through all the regression algorithms in scikit-learn and then used a collection of the ones that gave the best scores individually. Scikit-learn has a lot of regression algorithms and many of them didn’t perform very well. The best results I got were for Lasso, ElasticNet (with L1 ratios bigger than 0.4) and Orthogonal Matching Pursuit. Generally I found that the algorithms that eliminated a lot of variables (setting their coefficients to zero) worked the best. I was a bit surprised that Ridge regression worked quite badly for me (more on that next time). I also tried adding some polynomial features using the scikit-learn PolynomialFeatures function, but I couldn’t find anything useful there.

I trained these models using cross-validation (i.e. the CV versions of the scikit-learn functions). Cross-validation divides the data up and trains/tests on different folds to find the best settings. To some degree this avoids overfitting and provides more robustness to bad data.
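
A minimal sketch of that offline, cross-validated comparison, assuming a simple hold-out of the last 30% of rows (an assumption for illustration, not necessarily how I split it at the time): fit the CV variants of the regressors mentioned above and compare their held-out scores.

```python
from sklearn.linear_model import LassoCV, ElasticNetCV, OrthogonalMatchingPursuitCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def compare_offline(X, y):
    # Hold out the last 30% without shuffling, to respect the time ordering.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
    candidates = {
        "lasso": LassoCV(),
        "elastic_net": ElasticNetCV(l1_ratio=[0.5, 0.7, 0.9]),
        "omp": OrthogonalMatchingPursuitCV(),
    }
    for name, model in candidates.items():
        model.fit(X_tr, y_tr)
        print(name, "held-out R2:", r2_score(y_te, model.predict(X_te)))
```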

Further, I ran these regressions on two views of the data: one using the previous and current values of a selection of columns, and the other using the whole set of columns but just for the current timestamp. Once this was set up for one regression, adding more regressions didn’t slow down processing much, and my overall runtime stayed low. So I had enough processing time left over to add an ExtraTreesRegressor, which was trained during the runs.

It took quite a few submissions to figure out a good balance of solutions. Perhaps with more time a better optimum could have been obtained, but hard time limits are often good.

RANSAC

A number of people in the competition with more of a data background spent quite a bit of time cleaning the data, which seemed quite noisy, with quite a few bad outliers. I wasn’t really keen on this and wanted my ML algorithms to do it for me. This is when I discovered the scikit-learn functions for dealing with outliers and modelling errors. The one I found useful was RANSAC (RANdom SAmple Consensus). It’s quite a clever algorithm that uses subsets of the data to figure out the outliers (by how far they fall from various predictions) and to find a good outlier-free subset of the data to train on. You pass a linear model into RANSAC to use for estimating, and you can get the coefficients out at the end to use. The downside is that running RANSAC is very slow; to get good results it would take me about 8 hours to train a single linear model.

The good news is that by using RANSAC rather than cross-validation I improved my score quite a bit, and as a result ended up in about 70th place before the competition ended. You can pass the cross-validation version of a regression into RANSAC to perhaps get even better results, but I found this too slow (i.e. it was still running after a day or two).
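
A minimal sketch of the RANSAC wrapping described above; the parameters are illustrative guesses, not the settings I actually used.

```python
from sklearn.linear_model import LinearRegression, RANSACRegressor

def ransac_coefficients(X, y):
    # Fit a linear model on consensus inliers only, then read back its coefficients.
    ransac = RANSACRegressor(LinearRegression(), min_samples=0.5, random_state=0)
    ransac.fit(X, y)
    print("kept", ransac.inlier_mask_.sum(), "of", len(y), "rows as inliers")
    return ransac.estimator_.coef_, ransac.estimator_.intercept_
```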

Summary

This wraps up what I did in February and basically the RANSAC version of my best Ensemble is what I submitted as my final result for the competition. Next time I’ll discuss the final results of the competition and how I did on the final test dataset.

Written by smist08

March 7, 2017 at 9:25 pm

Playing the Kaggle Two Sigma Challenge – Part 3


Introduction

Previously I introduced the Kaggle Two Sigma Financial Modeling Challenge which ran from December 1, 2016 to March 1, 2017. Then last time I covered what I did in December. This time I’ll cover what I worked on in January.

Update 2017/03/07: I added the source code for adding the previous value of a few columns to the data in RegressionLab4.py posted here.

Time Series Data

Usually when you predict the stock market you use a time series of previous values to predict the next value. With this challenge the data was presented in a different way: you were given a lot of data on a given stock at a point in time and then asked to predict the next value. Of course you don’t have to use the data exactly as given; you can manipulate it into a better format. So there is nothing stopping you from reformatting the data so that for a given timestamp you also have a number of historical data points; you just need to remember them and add them to the data table. Sounds easy.

Generally computers are good at this sort of thing; however, for this challenge we had 20 minutes of running time and 8 GB of RAM to do it in. For test training runs there were about 800,000 rows of training data and then 500,000 test rows. These all needed to be reformatted and the historical values held in memory. Further, you couldn’t just do this as an array, because the stock symbols changed from timestamp to timestamp, i.e. symbols were inserted and removed, meaning you had to stay indexed by symbol to ensure you were shifting data properly. The pandas library has good support for this sort of thing, but even with pandas it tended to be expensive in processing time and memory usage.
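
The reshaping itself comes down to a pandas groupby/shift per stock id, something like the hedged sketch below (the column names are invented; the real work was keeping this within the time and memory budget).

```python
import pandas as pd

def attach_history(df, cols, lags=2, id_col="id", time_col="timestamp"):
    """For each stock id, add columns holding that column's previous `lags` values."""
    df = df.sort_values([id_col, time_col]).copy()
    for c in cols:
        for k in range(1, lags + 1):
            # shift within each id so inserted/removed symbols stay aligned
            df[f"{c}_lag{k}"] = df.groupby(id_col)[c].shift(k)
    return df

# e.g. train = attach_history(train, ["technical_20", "fundamental_11"], lags=2)
```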

My first attempt was: why not just keep the last 10 values of everything and feed that into scikit-learn algorithms to see what they liked? Basically this got nowhere, since my runs were aborted as they hit the memory limit. Next I tried adding a few time-series-like columns to the data table and fed that into ExtraTreesRegressor. This worked quite well, but I couldn’t add much more data without running out of memory or slowing things down, so I couldn’t use many trees in the algorithm.

From this experience, I tried selecting just a few columns, presenting them in different ways, and keeping different amounts of historical data. Experimenting, I found I got the best results using 36 columns and keeping 2 timestamps of history. This wasn’t much work on my part, but it took quite a few days, since you only get two submissions per day to test against the larger evaluation set.

Strictly speaking this isn’t a time series since I don’t really have historical values of the variable I’m predicting, however it is theorized (but not confirmed) that some of the input variables include weighted averages of historical data, so it might not be that far off.

Ensemble Averaging

Ensemble averaging is the technique of taking a number of machine learning algorithms that solve a particular problem and using the average of their outputs as the final result. If the algorithms are truly independent then there are theorems in probability that support this, but even if they aren’t fully independent, practice shows it provides surprisingly good results. Further, most Kaggle competitions are won by some sort of weighted average of a good number of algorithms. Basically this approach averages out the errors and biases that any individual algorithm might introduce.
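
In code, a plain unweighted version of this is just a few lines; a sketch (the weighting and the choice of models are left out):

```python
import numpy as np

def ensemble_predict(models, X):
    """Average the predictions of several independently trained models."""
    preds = np.column_stack([m.predict(X) for m in models])
    return preds.mean(axis=1)

# e.g. ensemble_predict([lasso_model, elastic_net_model, extra_trees_model], X_test)
```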

Note that the Christmas surprise solution from the previous blog article was really an ensemble of three algorithms where the average was quite a bit better than any of the individual components.

By the end of January I felt I had enough ways to slice the data and had tried enough algorithms that I could start combining them into ensembles to get better results. I started by combining a number of regression algorithms, since these were fairly fast to train and execute (especially on a reduced set of columns). I found that the regressions that gave the best results were the ones that eliminated a lot of variables and kept just 8 or so non-zero coefficients. This surprised me a bit, since I would have expected better results from Ridge regression, but I didn’t seem able to get them.

This moved me up the leaderboard a bit, but generally through January I dropped down the leaderboard and found it a bit of a struggle to stay in the top 100.

Missing Values

I also spent a bit of time trying to fill in the missing values better. Usually with stock data you can use pandas fillna with back or forward filling (or even interpolation). However, these didn’t work so well here because the data wasn’t strictly rectangular, due to stocks being added and removed. Most things I tried just used too much processing time to be practical; in fact, just doing a fillna with the mean (or median) of the training data was a pretty big time user. So I tried this offline, running locally on my laptop, to see if I could get anywhere; I figured that if I got better results I could then try to optimize it and get it going in the Kaggle VM. But even with quite a bit of running it seemed I didn’t get any better results this way, so I gave up on it. I suspect that, practically speaking, the ML algorithms just ignored most of the columns with a lot of missing values anyway.
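
For reference, the kind of fill strategy being weighed looks roughly like this sketch (per-id forward fill, then fall back to the column mean); as noted, on a dataset this size even this was expensive.

```python
import pandas as pd

def fill_missing(df, id_col="id"):
    df = df.copy()
    value_cols = df.columns.difference([id_col, "timestamp", "y"])
    # Forward fill within each stock id (slow on a large frame).
    df[value_cols] = df.groupby(id_col)[value_cols].ffill()
    # Whatever is still missing (e.g. a stock's first rows) gets the column mean.
    df[value_cols] = df[value_cols].fillna(df[value_cols].mean())
    return df
```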

Summary

This was a quick overview of my progress in January; next up, the final month of February. One good thing about the two-submissions-per-day limit was that it capped the amount of work I did on the competition, since it could be kind of addictive.

Written by smist08

March 6, 2017 at 5:43 pm

Playing the Kaggle Two Sigma Challenge – Part 2


Introduction

Last time I introduced the Kaggle Two Sigma Challenge and this time I’ll start describing what I did at the beginning of the competition. The competition started at the beginning of December, 2016 and completed on March 1, 2017.  This blog covers what I did in December.

Update 2017/03/07: I uploaded the Python source code discussed here to my Google Drive. You can access it here. The files are TensorFlow1.py for the first (wide) TensorFlow attempt, TFNarrow1.py for the second, narrower TensorFlow attempt, RegressionLab1.py for my regression with reinforcement learning, and TreeReg1.py for the Christmas surprise with reinforcement learning added.

TensorFlow


Since I had spent quite a bit of time playing with and blogging about predicting the stock market with TensorFlow, this is where I started. The data was all numeric, so it was quite easy to get going: no one-hot encoding, and really the only pre-processing was to fill in missing values with the pandas fillna function (where I just used the mean, since this was easiest). I’ll talk more about these missing values later, but to get started they were easy to fill in and ignore.

I started by just feeding all the data into TensorFlow, trying some simple 2-, 3- and 4-layer neural networks. However, my results were quite bad: either the model couldn’t converge, or even when it did, the results were much worse than just submitting zeros for everything.

With all the data the model was quite large, so I thought I should simplify it a bit. The Kaggle competition has a public forum which includes people publishing public Python notebooks, and early in every competition some very generous people publish detailed statistical analyses and visualizations of all the data. Using these I could select a small subset of data columns with higher correlations to the target and just use those instead. This let me run the training longer, but it still didn’t produce any useful results.

At this point I decided that given the computing resource limitations of the Kaggle playgrounds, I wouldn’t be able to do a serious neural network, or perhaps doing so just wouldn’t work. I did think of doing the training on my laptop, say running overnight and then copy/pasting the weight/bias arrays into my Python code in the playground to just run. But I never pursued this.

Penalized Linear Regression

My next thought was to use linear regression, since it tends to be good for extrapolation problems: it doesn’t suffer from non-linearities going wild outside the training data. Ordinary least squares regression can suffer from overfitting, especially when there are a large number of variables that aren’t particularly linearly independent, and it can be thrown off by bad, errant data. The general consensus on the forums was that this training set had a lot of outliers for some reason. In machine learning there is a large family of penalized linear regression algorithms that try to address these problems by one means or another. Typically they penalize large weights, borrowing the regularization technique we described here, and in effect bring variables in one at a time, keeping a new column only as long as it improves the results. There are also various ways to filter out outliers or reduce their effect by using error metrics other than the sum of squares. Two popular methods are Lasso regression, which penalizes the coefficients using the taxi-cab (L1) metric (the sum of their absolute values), and Ridge regression, which penalizes the sum of their squares; the L1 penalty tends to drive many coefficients to exactly zero, so Lasso also acts as a variable selector. Then there is a combined algorithm called Elastic Net regression that uses a mix of the two penalties, where you choose the mixing ratio.

First Victory

Playing around with this a bit, I found the scikit-learn algorithm ElasticNetCV worked quite well for me. ElasticNetCV splits up the training data and iteratively tests candidate regularization strengths (and hence how many variables end up included) to find the best result. Choosing an l1 ratio of 0.45 actually put me in the top ten of the submissions. This was a very simple submission, but I was pretty happy to get such a good result.
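
That first good submission boiled down to very little code; a hedged sketch (variable names are illustrative, and the missing values were filled in beforehand):

```python
from sklearn.linear_model import ElasticNetCV

def fit_first_model(X_train, y_train):
    # Cross-validated elastic net with the L1/L2 mix fixed at 0.45.
    model = ElasticNetCV(l1_ratio=0.45)
    model.fit(X_train, y_train)
    print((model.coef_ != 0).sum(), "columns kept by the penalty")
    return model
```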

Reinforcement Learning

One thing that seemed a bit strange to me about the way the Kaggle gym worked was that you submitted your results for a given timestep and then got a reward for it; however, you didn’t get the correct results for the previous timestep. Normally for stock market prediction you predict the next day, get the correct results at the end of the day, and then predict the following day. Here you only get a reward, which is the R2 score for your submission. The idea is to have an algorithm like the following diagram, but incorporating the R2 score is quite tricky.

[Diagram: the reinforcement learning loop of actions, environment and rewards.]

I spent a bit of time thinking about this and had the idea that you could roughly back out the error variance from the R2 score and then, if you made an assumption about the underlying probability distribution, estimate the mean error. I could then introduce a bias to compensate for cumulative errors as time gets farther and farther from the training data.

Now there are quite a few problems with this, namely that the variance doesn’t give you the sign of the error, which is worrying. I tried a few different relationships of mean to variance and found one that improved my score quite a bit. But again, this was all rather ad hoc.

Anyway, every ten timesteps I skipped applying the bias so I could measure a fresh bias estimate, and then used that bias on the other timesteps.
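
Here is a heavily hedged sketch of that ad-hoc idea, purely to show the arithmetic: since R2 = 1 - MSE/Var(y), the reward gives you a rough RMSE estimate, and if you pretend the error is mostly a constant offset you get a bias magnitude. The sign and the scale factor were exactly the parts that had to be guessed and tuned; none of the numbers below are the ones I actually used.

```python
import numpy as np

def bias_estimate(r2_reward, target_variance, scale=0.5, sign=1.0):
    # R2 = 1 - MSE / Var(y)  =>  MSE ~= (1 - R2) * Var(y)
    mse_estimate = max(0.0, 1.0 - r2_reward) * target_variance
    # If the error were a pure constant offset, |offset| would be sqrt(MSE);
    # the sign and scale are the ad-hoc, experimentally tuned parts.
    return sign * scale * np.sqrt(mse_estimate)

# predictions = model.predict(X_step) + bias_estimate(last_reward, y_train.var())
```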

Second Victory

The competition moves fairly quickly, so a week or two after my first good score I was well down in the standings. Adding my mean bias from the reward to my ElasticNetCV regression put me back into the top 10 again.

A Christmas Present

I went to bed on Christmas Eve in 6th place on the competition leaderboard and was pretty happy about that. When I checked in on Christmas Day I was down to 80th place. As a Christmas present to all the competitors, one of the then-current top people above me had made his solution public, which meant lots of other folks forked it, submitted it and got his score.

This solution used the random-forest-style ExtraTreesRegressor from scikit-learn, combined with a simple mean-based estimate and a simple regression on one variable. The tree part was interesting because it told the algorithm which values were missing, so it could learn to handle them appropriately.

At first I was really upset about this, but when I had time I realized I could take that public solution, add my mean bias and improve upon it. I did this and got back into the top ten. So it wasn’t that bad.

Summary

Well this covered the first month of the competition, two more to go. I think getting into the top ten on the leaderboard a few times gave me the motivation to keep plugging away at the competition and finding some more innovative solutions. Next up January.


Written by smist08

March 3, 2017 at 11:51 pm

Playing the Kaggle Two Sigma Challenge – Part 1


Introduction

As I started learning about machine learning and playing with simple problems, I wasn’t really satisfied with the standard datasets everyone starts out with, like MNIST. So I moved on to playing with stock market predictions, which was fun, but there was really no metric for how well I was doing (especially since I wasn’t going to use real money). Then, as I was reading various books on machine learning and AI, I kept running into references to Kaggle competitions.


Kaggle is a company that hosts machine learning competitions. It also facilitates hosting data sets, mentoring people and promoting machine learning education and research. The competitions are very popular in the machine learning community and often have quite large cash prizes, though a lot of people just do it to get Kaggle competition badges.

Kaggle appealed to me because there were quite a few interesting data sets and you could compare how your algorithms were doing against the other people playing there. The competitions usually run for 3 or 4 months and I wanted to start one at the beginning rather than jump into the middle of one or play with the dataset of an already completed competition, so I waited for the next one to start.

The next competition to start was the Allstate Claims Severity challenge, where you predicted how much money someone was going to cost the insurance company. There was no prize money with this one and I wasn’t really keen on the problem. However, the dataset was well suited to using TensorFlow and neural networks, so I started on that. I only played with it for a month before abandoning it for the next competition, but my final placing was 1126th out of 3055 teams.

Then came the Outbrain Click prediction competition. Outbrain is that annoying company that places ads at the bottom of news articles and they wanted help better predicting what you might be interested in. This competition had $25,000 in prizes, but besides not really wanting to help Outbrain, I quickly realized that the problem was way beyond what I could work on with my little MacBook Air. So I abandoned that competition very quickly.

Next came the Santander Bank Product Recommendation competition where you predicted other banking products to recommend to customers. There was $60,000 in prize money for this one. I played with this one a bit, but didn’t really get anywhere. I think the problem was largely dealing with bad data which although important, isn’t really what I’m interested in.


Then along came the Two Sigma Financial Modelling Challenge. This one had $100,000 in prize money and was actually a problem I was interested in. I played this competition to completion, and my plan is to detail my journey over a few blog posts. Only the top seven entries received any prize money, and since these competitions are entered by university research teams, corporate AI departments and many others, it is very hard to actually win any money. The top 14 won a gold medal, the top 5% a silver medal and the top 10% a bronze. 2071 teams entered this competition.

The Two Sigma Financial Modeling Challenge

One of the problems I had with the first set of challenges was that you ran your models on your own computer and then just submitted the results for evaluation. This put me at a disadvantage compared to people with access to much more powerful computers or large cloud computing infrastructure, and I didn’t really want to spend money on the cloud or buy specialized hardware for a hobby. With the Two Sigma challenge this changed completely. With this challenge you ran your code in a Kaggle-hosted Docker image, and rather than submitting your results you submitted your code, which had to run within the resources of that image. This leveled the playing field for all the competitors. It restricted you to programming in Python (which I like, but many R programmers objected to) and to the standard Python machine learning libraries, but these seemed to include everything I’d ever heard of.

The provided dataset consisted of 60 or so fundamental metrics, 40 or so technical indicators and 5 derived indicators, plus, in the provided data, the value of what you were trying to predict. No further explanation was given of what anything was; it was up to you to make what you could of the data and predict the target variable. The data was grouped by id and timestamp, so you appeared to be tracking a portfolio of about 1000 stocks or commodities through time. The stocks in the portfolio changed over time, and when predicting you weren’t explicitly given the previous values of the variable you were predicting. There was some feedback, in that you predicted results for the portfolio one timestamp at a time and received a score for each submitted timestamp’s group.

We were given about 40% of the data to train our models and do test runs, which we could do either locally or in the Kaggle cloud. When you submitted your code for evaluation it ran on the full dataset, training on that 40% and then predicting the rest, which we never saw. You could only submit two entries a day for evaluation, so you had to make each one count. This was to stop people from simply overfitting the evaluation data to win the competition (i.e. to limit cheating).

We could run against the test data as much as we liked in the Kaggle cloud, but there were restrictions on processing time and memory usage. For test runs we were allowed 20 minutes of processing and 8 GB of RAM; for submissions we were allowed 1 hour of processing and 16 GB of RAM, which tended to work out given the size difference in the datasets used. Given the size of the dataset, this meant your training and evaluation had to be very efficient to run under the given constraints. I’ll discuss the trade-offs I had to make to run in this environment quite a bit over the coming blog articles; in particular it meant using pandas and numpy very efficiently and avoiding Python loops at all costs.
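
As a tiny illustration of the loop-avoidance point (the column names are invented), compare an explicit Python loop with the vectorized pandas equivalent; only the second style fits comfortably in the time budget on 800,000+ rows.

```python
import pandas as pd

def slow_mean_by_id(df):
    means = {}
    for i in df["id"].unique():          # explicit Python loop: far too slow here
        means[i] = df.loc[df["id"] == i, "y"].mean()
    return means

def fast_mean_by_id(df):
    return df.groupby("id")["y"].mean()  # vectorized: orders of magnitude faster
```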

Summary

Well, I seem to have filled up this blog post without even starting the competition. In future posts I’ll cover the various algorithms I tried, what worked and what didn’t, and the progression of results that eventually combined to give me my final score. I’ll also discuss what the winners have revealed, how what they did differed from what I did, what I missed and what I got right. Anyway, I think this is going to take quite a few posts to cover everything.


Written by smist08

March 2, 2017 at 6:48 pm