Stephen Smith's Blog

Musings on Machine Learning…


Your New AI Accountant


Introduction

We live in a complex world where we’ve accumulated huge amounts of knowledge, and knowing everything relevant to a profession is getting harder and harder. We have better and better knowledge retrieval programs that let us look up information at our fingertips using natural language queries and Google-like searches. But in fields like medicine and law, sifting through all the results and sorting out what is relevant and important remains difficult, and in medicine especially there is a lot of bogus and misleading information that can lead to disastrous results. This is a prime application area where Artificial Intelligence and Machine Learning are starting to show some real promise. We have applications like IBM’s Watson successfully diagnosing some quite rare conditions that stumped doctors, and systems like ROSS providing AI solutions for law firms.

How about AIs supplementing Accountants? Accountants are very busy and in demand. The baby boomers are retiring now, and far more Accountants are retiring than are being replaced by young people entering the profession. For many businesses, getting professional advice from an Accountant is becoming a major problem. This affects their ability to properly meet financial reporting requirements, stay in regulatory compliance and maintain a firm, complete understanding of how their business is doing. This article looks at how AI can help with this problem, and at the sorts of things AIs can be trained to do to perform some of these functions. Of course, you will still need an Accountant to provide human oversight and a sanity check, but if things are set up correctly to start with, it will save you a lot of time and money.

Interfaces

If you have an AI with Accounting knowledge, how can it help you? In this section we’ll look at a few ways the AI system could interact with the employees of the business, as well as with the Business Applications the business uses, like its Accounting or CRM systems.

Chatbots

Chatbots are becoming more common. Here you either type natural language queries to the AI, or talk to it through a voice recognition component. The query processor is connected to the AI, which in turn is connected to your company’s databases as well as a wealth of professional information on the Internet. These AIs usually have multiple components for voice input, natural language processing, various business areas of expertise, and multiple ways of presenting results.
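To make that pipeline concrete, here is a minimal sketch in Python: a crude keyword-based intent router that hands a query to a handler backed by company data. The get_overdue_invoices helper and the keyword rules are hypothetical stand-ins; a real system would use a trained language model or a toolkit like Amazon Lex for the understanding step.

```python
# A minimal sketch of a chatbot pipeline: classify the intent of a
# natural language query, then route it to a handler that answers from
# company data. Handlers and keyword rules are hypothetical stand-ins.

def get_overdue_invoices():
    # Hypothetical data access layer; a real bot would query the
    # Accounting database here.
    return [("INV-1001", 1250.00), ("INV-1007", 310.50)]

def handle_overdue(query):
    lines = [f"{num}: ${amt:,.2f}" for num, amt in get_overdue_invoices()]
    return "Overdue invoices:\n" + "\n".join(lines)

def handle_unknown(query):
    return "Sorry, I didn't understand. Try asking about overdue invoices."

# Crude keyword matching stands in for the natural language processing
# component a production bot would use.
INTENTS = [(("overdue", "invoice"), handle_overdue)]

def answer(query):
    text = query.lower()
    for keywords, handler in INTENTS:
        if all(k in text for k in keywords):
            return handler(query)
    return handle_unknown(query)

print(answer("Which invoices are overdue?"))
```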

There have been some notable chatbot failures, like Microsoft’s Twitter chatbot which quickly became a racist asshole. But we are starting to see some more successful implementations, like Sage’s Pegg or KLM’s Messenger bot. Plus the general purpose bots like Alexa, Siri and Allo are getting rather good. There are also some really good toolkits for developing chatbots, like Amazon Lex, so this is becoming easier for more and more developers.

In-program Advice

There have been some terrible examples of in-product advice, such as the best-forgotten Microsoft Clippy. But with advances in User Centered Design, much less intrusive and subtle ways of helping users have emerged. Generally these require good content, so what they present is actually useful, and they have to be unobtrusive, never interfering with someone’s work unless the user chooses to pay attention to them. When they are invoked, they can offer to make changes automatically, provide more information, or keep things to a simple one-line tip.

If these help technologies are combined with an AI engine, they can monitor what the user is doing and present application- and context-based help: suggesting that a different G/L account be used for better Financial Reporting, that the sales taxes on an invoice should be different due to some local regulation, or that additional items be added to an Accounting document.

These technologies allow the system to learn from how a company uses the product and make more useful suggestions over time, as well as incorporate industry standards to assist.
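As a sketch of how such a suggestion might work, suppose the system has logged which G/L account each vendor’s invoices were historically posted to. The data shapes and the 70% threshold below are assumptions for illustration:

```python
from collections import Counter, defaultdict

# Sketch: learn each vendor's usual G/L account from posting history,
# then flag a new entry that deviates from that pattern.
history = [
    ("Acme Supplies", "5100"), ("Acme Supplies", "5100"),
    ("Acme Supplies", "5100"), ("Acme Supplies", "6200"),
]

usage = defaultdict(Counter)
for vendor, account in history:
    usage[vendor][account] += 1

def suggest_account(vendor, entered_account):
    counts = usage.get(vendor)
    if not counts:
        return None  # no history, nothing to suggest
    usual, n = counts.most_common(1)[0]
    share = n / sum(counts.values())
    # Only speak up when history strongly favors one account.
    if entered_account != usual and share >= 0.7:
        return f"Most {vendor} invoices ({share:.0%}) post to {usual}."
    return None

print(suggest_account("Acme Supplies", "6200"))
```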

Offline Monitoring

In most larger businesses, the person using the Business Application isn’t the one who needs, or can act on, an Accountant’s advice. Most data entry personnel have to follow corporate procedures and would get fired if they changed what they’ve been told to do, even if it’s wrong; the advice usually has to go to the CFO or someone senior in the Accounting department. In these cases an AI can monitor what is going on in the business and make recommendations to the right person, for instance watching how G/L Accounts are being used and sending a recommendation for changes that would facilitate better Financial Reporting or regulatory compliance.

Care has to be taken to keep this functionality well clear of the unpopular productivity monitoring software that records people’s keystrokes to track when they are working and how fast. Generally this functionality has to stick to improving the business, rather than being perceived as Big Brother snitching on everyone.

Summary

Most small business owners consider Accounting a necessary evil, something they are required to do to submit their corporate income tax. They do the minimum required and don’t pay much attention to the results. But as a company grows, its Accounting data can give great insights into how the business is running. Managing Inventory, A/R and A/P makes a huge difference to a company’s cash flow and profitability, and correctly, proactively handling regulatory compliance can be a huge saver of time and of costs in fines and lawsuits.

It used to be that sophisticated programs to handle these things required huge IT departments and millions of dollars invested in software, and were really only available to large corporations. With current advances in AI and Machine Learning, much of this sophisticated functionality can be integrated into the Business Applications used by small and medium sized businesses. In fact, in a few years this will be a mandatory feature that users expect in all the software they use.


Written by smist08

July 29, 2017 at 8:42 pm

Making Business Applications Intelligent


Introduction

Today Business Applications tend to be rather boring programs that present the user with complicated forms needing to be filled in with a lot of detail. Accuracy is paramount, and there are a lot of security measures to prevent fraud and theft. Companies need to hire large numbers of people to enter data, very repetitively, into these forms. With modern User Centered Design these forms have become a bit easier to work with, and they have progressed quite a bit since the original Business Apps running on 3270 terminals connected to IBM mainframes, but I don’t think anyone really considers these applications fun. Necessary and important, yes, but still not many people’s favorite programs.

We’ve been talking a lot about the road to strong AI, and we’ve looked at a number of AI tools like TensorFlow, but what about more practical applications that are possible right now? My background is working on ERP software, namely Sage 300/Accpac. In this article I’ll look at how machine learning and AI algorithms will start to be incorporated into standard business applications. A lot of what we talk about here will be integrated into many applications, including CRM and Business Analytics.

Many of the ideas I talk about in this article are available today, just not all in the same place. Over the coming years I think we’ll see most of these become standard expected features in every Business Application. Just like we expect modern User Centered Design, tomorrow we will expect intelligent algorithms supporting us behind the scenes in everything we do.

[Figure: Very high level diagram of the main components of an intelligent Business Application]

Some Quick Ideas

With Machine Learning and AI algorithms there could be many small improvements made to Business Applications, major changes in the way things work, all the way up to automating many of the processes we currently perform manually. Often the small improvements make a huge difference to the lives of current users and are the easiest to implement, so I don’t want to ignore these possibilities on the way to pursuing larger, more ambitious, longer term goals. Perhaps these AI applications aren’t as exciting as self-driving cars or real time speech translation, but they will make a huge difference to business productivity and lead to large cost savings for millions of companies, providing real business benefit through better accuracy, better productivity and automated business processes that lead to real cost savings and real revenue boosts.

Better Defaulting of Fields

Currently fields are defaulted based on configuration screens set up by administrators. The defaults might change based on an object like a customer or customer group, but they tend to be fairly static. An algorithm could watch what a user (or all the users at a company) tends to enter and make much more intelligent defaults, based on the values of other fields, the time and date, current promotions, even news feed items. If defaults are provided more intelligently, users will save huge amounts of data entry time.
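A minimal sketch of the idea, assuming the application records each value users enter along with some notion of context; the context keys and field values below are made up:

```python
from collections import Counter, defaultdict

# Sketch: learn a field's default from what users actually entered in
# each context, instead of a static configured value.
observed = defaultdict(Counter)

def record(context, value):
    observed[context][value] += 1

def default_for(context, fallback):
    counts = observed.get(context)
    if counts:
        return counts.most_common(1)[0][0]  # most frequent past entry
    return fallback

# The context could combine customer group, document type, date, etc.
record(("RETAIL", "invoice", "terms"), "NET30")
record(("RETAIL", "invoice", "terms"), "NET30")
record(("RETAIL", "invoice", "terms"), "NET15")

print(default_for(("RETAIL", "invoice", "terms"), fallback="NET60"))  # NET30
```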

Better Auto-Suggestions

Currently auto-suggestions on fields tend to be based on a combination of previously entered values and a “Google-like” search on what has been typed so far. Like defaulting, this could be greatly improved with more sophisticated algorithms. The real Google search already does this, but most “Google-like” searches integrated into Business Apps do not. Just as Google Search uses all your previous searches, trending topics, social media feeds and many other sources, so could your Business Application, and better, more intelligent suggestions will greatly improve productivity.
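As a toy illustration of frequency-weighted suggestions; the item names and usage counts are invented:

```python
# Sketch: rank auto-suggestions by blending prefix matching with how
# often each value was chosen before, rather than plain alphabetical
# matching.
frequency = {
    "Consulting Services": 120,
    "Consulting Hardware": 5,
    "Contractor Fees": 40,
}

def suggest(prefix, limit=3):
    prefix = prefix.lower()
    matches = [(name, freq) for name, freq in frequency.items()
               if name.lower().startswith(prefix)]
    # Most frequently used values first, like a search engine's ranking.
    matches.sort(key=lambda item: -item[1])
    return [name for name, _ in matches[:limit]]

print(suggest("con"))
# -> ['Consulting Services', 'Contractor Fees', 'Consulting Hardware']
```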

Fraud Detection

Credit card companies already use AI to scan people’s purchasing patterns, as well as the patterns of people using stolen cards, to flag when they think a credit card has been stolen or compromised. Similarly, Business Applications can monitor various company procedures and expenses to detect theft (perhaps strangeness in Inventory Adjustments) or unusual payments. There could be regulatory restrictions on what data is used; HR data, for instance, is probably protected from being incorporated into this sort of analysis. Theft and fraud are currently a huge cost to businesses, and AI could help reduce them. Sometimes just knowing that tools like this are in use acts as a major deterrent.
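As a sketch, an off-the-shelf anomaly detector like scikit-learn’s IsolationForest could flag outsized inventory adjustments. The two features (quantity, value) and the generated data below are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Sketch: flag unusual inventory adjustments with an off-the-shelf
# anomaly detector. A real system would use many more signals.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 100], scale=[2, 30], size=(200, 2))
suspicious = np.array([[80, 4000]])           # one outsized write-off
adjustments = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(adjustments)       # -1 marks an outlier

flagged = adjustments[labels == -1]
print(f"Flagged {len(flagged)} adjustment(s) for review:\n{flagged}")
```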

Purchasing

Algorithms could better detect when items need reordering, keeping inventory levels down. Further, they can continuously search vendor prices looking for deals, and weigh whether it’s worth buying now at a cheaper price and incurring the inventory carrying expense, or waiting. When you regularly purchase thousands of items or more, a dynamic algorithm keeping track of all this can really help.
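A minimal sketch of the underlying logic: the classic reorder point formula plus a naive check against recent vendor quotes. All of the thresholds and numbers are illustrative assumptions:

```python
# Sketch: reorder point plus a naive "is this price a deal?" check
# against recent vendor quotes.

def reorder_point(daily_demand, lead_time_days, safety_stock):
    # Reorder when on-hand stock can't cover demand during the lead time.
    return daily_demand * lead_time_days + safety_stock

def should_buy(on_hand, rop, quote, recent_quotes, discount=0.05):
    if on_hand <= rop:
        return True                       # we need the stock regardless
    avg = sum(recent_quotes) / len(recent_quotes)
    return quote <= avg * (1 - discount)  # stock up early on a real deal

rop = reorder_point(daily_demand=12, lead_time_days=7, safety_stock=30)
print(rop)                                         # -> 114
print(should_buy(150, rop, quote=9.10, recent_quotes=[10.0, 9.8, 10.2]))
```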

Customer Data

When you get a new customer you need all sorts of information, such as their address, phone number, contacts, etc. An algorithm could search the web and fill in this information automatically (a specific example of better defaulting). The AI could also scan various web sources (some perhaps pay services for credit ratings and the like) to suggest a good credit rating and limit for the new customer. The algorithm could then run in the background and update existing customers as this data changes. Knowing your customers and keeping their data up to date is a major challenge for companies with many customers, and much of this work can be automated.

Chasing Accounts Receivables

Collecting money is always a major challenge for every company, and much of this work could be automated. Algorithms can watch the paying habits of customers and know, say, that a customer always pays at the end of the quarter, so there is no need to worry when they go over 30 days. But if a customer suddenly develops credit rating problems, or their stock tanks, or there is negative news about the company, then you had better get collecting. Again, this is all a lot of work, and algorithms can greatly reduce the manual workload and make the whole process more efficient.
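Here is a small sketch of the paying-habits idea: chase an invoice only when it is late relative to that customer’s own history. The payment history and the two-standard-deviation rule are assumptions for illustration:

```python
import statistics

# Sketch: learn each customer's paying habits and only chase invoices
# that are late *for that customer*, not by a blanket 30-day rule.
days_to_pay = {
    "Globex":  [88, 91, 87, 90],   # always pays around the quarter end
    "Initech": [25, 28, 31, 27],
}

def should_chase(customer, days_outstanding):
    history = days_to_pay.get(customer, [])
    if len(history) < 3:
        return days_outstanding > 30   # fall back to the blanket rule
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return days_outstanding > mean + 2 * stdev

print(should_chase("Globex", 45))    # False: normal for this customer
print(should_chase("Initech", 45))   # True: well past their usual habit
```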

Setting Prices

Setting prices is an art and a science. You need to lower prices to move slow items out of inventory, yet keep prices high to maximize return, and you need to be aware of competitors’ prices and watch for those items going on sale. Algorithms can greatly help with this; Amazon is a master of it, maintaining millions of prices with AI all over its web site. Algorithms can scan the web for competitive pricing, watch inventory levels and item costs, and know where we are in a quarter and how much we need to stimulate sales to meet targets. They can weigh the trade-offs between relying on customer loyalty and having to be the low cost option, and the same applies to customer and volume discounts. Once you have a lot of items for sale, maintaining prices is a lot of work, especially in the world of online shopping where everything changes so dynamically. With the big players like Amazon and Walmart using these algorithms so effectively, you need to as well to be competitive.
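As a sketch of a very simple repricing rule along these lines; the velocity thresholds, step size and margin floor are all invented for illustration:

```python
# Sketch: nudge price down when stock is moving slowly or a competitor
# undercuts us, and up when stock is scarce, within a floor that
# protects margin.

def reprice(price, cost, weeks_of_stock, competitor_price,
            min_margin=0.15, step=0.03):
    floor = cost * (1 + min_margin)
    if weeks_of_stock > 12 or competitor_price < price:
        price = price * (1 - step)        # stimulate sales
    elif weeks_of_stock < 2:
        price = price * (1 + step)        # scarce: capture more margin
    return max(round(price, 2), round(floor, 2))

print(reprice(price=49.99, cost=30.0, weeks_of_stock=16,
              competitor_price=47.50))    # -> 48.49
```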

Summary

This article gave just a few examples of the many places we’ll see AI and Machine Learning algorithms integrated into all our Business Applications. The examples here are all possible today and in use individually by large corporations. The cost of these technologies is coming down, and we are seeing them integrated into lower cost Business Applications for small and medium sized businesses.

As these become adopted by more and more companies, it will become a competitive necessity to adopt them or risk falling behind in the fast paced online world. There will still be a human element to monitor and set policies, but humans can’t perform many of these tasks at the speed and scale that today’s world requires.

For the users of Business Applications, adding AI to the user interactions should make these applications much more pleasant to operate. Instead of gotchas there will be helpful suggestions and reminders; instead of needing to memorize and look up all sorts of codes, these will be usefully provided wherever necessary. I think this transition will be as big as the one we made from text based applications to GUI applications, but in this case the real ROI will be much higher.

 

Written by smist08

July 26, 2017 at 2:03 am

Playing the Kaggle Two Sigma Challenge – Part 1


Introduction

As I started learning about machine learning and playing with simple problems, I wasn’t really satisfied with the standard datasets everyone starts out with, like MNIST. So I went on to playing with stock market predictions, which was fun, but there was really no metric for how well I was doing (especially since I wasn’t going to use real money). Then, as I was reading various books on machine learning and AI, I kept running into references to Kaggle competitions.


Kaggle is a company that hosts machine learning competitions. It also hosts data sets, mentors people and promotes machine learning education and research. The competitions are very popular in the machine learning community and often have quite large cash prizes, though a lot of people just do it to earn Kaggle competition badges.

Kaggle appealed to me because there were quite a few interesting data sets and you could compare how your algorithms were doing against the other people competing. The competitions usually run for three or four months. I wanted to start one at the beginning, rather than jump into the middle of one or play with the dataset of an already completed competition, so I waited for the next one to start.

The next competition to start was the Allstate Claims Severity challenge, where you predicted how much a claim was going to cost the insurance company. There was no prize money for this one, and I wasn’t really keen on the problem. However, the dataset was well suited to TensorFlow and Neural Networks, so I started on it. I only played with it for a month before abandoning it for the next competition, but my last score ended up 1126th out of 3055 teams.

Then came the Outbrain Click Prediction competition. Outbrain is that annoying company that places ads at the bottom of news articles, and they wanted help better predicting what you might be interested in. This competition had $25,000 in prizes, but besides not really wanting to help Outbrain, I quickly realized that the problem was way beyond what I could work on with my little MacBook Air, so I abandoned it very quickly.

Next came the Santander Bank Product Recommendation competition, where you predicted which additional banking products to recommend to customers. There was $60,000 in prize money for this one. I played with it a bit but didn’t really get anywhere. I think the problem was largely about dealing with bad data, which, although important, isn’t really what I’m interested in.


Then along came the Two Sigma Financial Modeling Challenge. This one had $100,000 in prize money and was actually a problem I’m interested in. I played this competition to completion, and my plan is to detail my journey over a few blog posts. Only the top seven entries receive any prize money, and since these competitions are entered by university research teams, corporate AI departments and many others, it is very hard to actually win any money. The top 14 win a gold medal, the top 5% a silver medal and the top 10% a bronze. 2071 teams entered this competition.

The Two Sigma Financial Modeling Challenge

One of the problems I had with the earlier challenges was that you ran your models on your own computer and just submitted the results for evaluation. This put me at a disadvantage against people with much more powerful computers or access to large cloud computing infrastructure, and I didn’t really want to spend money on the cloud or buy specialized hardware for a hobby. With the Two Sigma challenge this changed completely: you ran your code in a Kaggle hosted Docker image, and rather than submitting your results, you submitted your code, which had to run within the resources of that image. This leveled the playing field for all the competitors. It restricted you to programming in Python (which I like, but many R programmers objected to) and to the standard Python machine learning libraries, but those seemed to include anything I’d ever heard of.

The provided dataset consisted of 60 or so fundamental metrics, 40 or so technical indicators and 5 derived indicators, plus, in the training data, the value of what you were trying to predict. No further explanation of what anything was was given; it was up to you to make what you could of the data and predict the target variable. The data was grouped by id and timestamp, so you appeared to be tracking a portfolio of about 1000 stocks or commodities through time. The stocks in the portfolio changed over time, and when predicting you weren’t explicitly given the previous values of what you were predicting. There was some feedback, in that you predicted the results for the portfolio one timestamp at a time and received a score back for each timestamp’s submitted group.
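To make that concrete, the evaluation loop inside the Kaggle kernel looked roughly like the sketch below. I’m reconstructing the kagglegym API from memory here, so treat the exact names as approximate:

```python
import kagglegym

# Roughly how the Two Sigma evaluation loop worked: you never saw the
# hold-out data, you just filled in predictions one timestamp at a time.
env = kagglegym.make()
observation = env.reset()

train = observation.train          # the training portion, a pandas DataFrame
# ... fit your model on `train` here, within the time/memory limits ...

while True:
    target = observation.target   # rows to predict for this timestamp
    target["y"] = 0.0             # a trivial all-zero prediction
    observation, reward, done, info = env.step(target)
    if done:
        print("final score:", info["public_score"])
        break
```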

We were given about 40% of the data to train our models and do test runs, which we could do locally or in the Kaggle cloud. When you submitted your code for evaluation, it ran on the full dataset, training on that 40% and then predicting the rest, which we never saw. You could only submit two entries a day for evaluation, so you had to make each one count. This was to stop algorithms from simply overfitting the evaluation data to win the competition (i.e. to limit cheating).

We could run against the test data all we liked in the Kaggle cloud, but there were restrictions on processing time and memory usage: 20 minutes of processing and 8GB of RAM for test runs, and 1 hour of processing and 16GB of RAM for submissions. This tended to work out, given the size difference between the datasets used. Given the size of the data, your training and evaluation had to be very efficient to run under these constraints. We’ll discuss the trade-offs I had to make to run in this environment quite a bit over the coming blog articles; in particular, you had to use pandas and numpy very efficiently and avoid Python loops at all costs.
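As an example of the kind of vectorization this forced, here is a sketch of building a per-id lagged feature with a pandas groupby/shift instead of a Python loop. The column names mimic the competition’s id/timestamp layout and the data is random:

```python
import numpy as np
import pandas as pd

# Sketch: build a "previous y for the same instrument" feature in one
# vectorized pass. The equivalent row-by-row Python loop over
# df.iterrows() would do the same job far too slowly to fit in the
# 20-minute limit.
n = 1_000_000
df = pd.DataFrame({
    "id": np.random.randint(0, 1000, n),
    "timestamp": np.arange(n),
    "y": np.random.randn(n),
}).sort_values(["id", "timestamp"])

# Vectorized lag: shift y within each instrument's own history.
df["y_lag1"] = df.groupby("id")["y"].shift(1)

print(df.head())
```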

Summary

Well, I seem to have filled up this blog post without even starting the competition. In future posts I’ll cover the various algorithms I tried, what worked and what didn’t, and the progression of results that eventually combined to give me my final score. I’ll also discuss what the winners revealed about their approaches, how what they did differed from what I did, what I missed and what I got right. I think this is going to take quite a few posts to cover everything.

 

Written by smist08

March 2, 2017 at 6:48 pm