Stephen Smith's Blog

Musings on Machine Learning…

Archive for the ‘Business’ Category

Introducing Risc-V


Introduction

Risc-V (pronounced Risc Five) is an open source hardware Instruction Set Architecture (ISA) for Reduced Instruction Set Computers (RISC) developed by UC Berkeley; the Five is because this is Berkeley's fifth RISC ISA design. It is a fully open standard, meaning that any chip manufacturer can create CPUs that use this instruction set without having to pay royalties. Currently the lion's share of the CPU market is split between two camps: the CISC based x86 architecture from Intel, with AMD as an alternate source, and the ARM camp, where the designs come from ARM Holdings and chip manufacturers license them under royalty agreements.

The x86 architecture dominates server, workstation and laptop computers. These are quite powerful CPUs, but at the expense of using more power. The ARM architecture dominates cell phones, tablets and Single Board Computers (SBCs) like the Raspberry Pi; these are usually a bit less powerful, but use far less power and are typically much cheaper.

Why do we need a third camp? What are the advantages and what are some of the features of Risc-V? This blog article will start to explore the Risc-V architecture and why people are excited about it.

Economies of Scale

The computer hardware business is competitive. For instance, Western Digital hard drives each contain an ARM CPU to manage the controller functions and handle the caching. Shaving a few dollars off each drive by avoiding the ARM royalty is a big deal. With Risc-V, Western Digital can make or buy a specialized Risc-V processor and save the ARM royalty, either improving their profits or making their drives more price competitive.

The difficulty with introducing a new CPU architecture is that to be price competitive you have to manufacture in huge quantities, or your product will be very expensive. This means that for there to be inexpensive Risc-V processors on the market, there have to be some large orders, and that's why adoption by large companies like Western Digital is so important.

Another giant boost to the Risc-V world is a direct result of Trump's trade war with China. With the US restricting trade in ARM and x86 technology to China, Chinese computer manufacturers are madly investing in Risc-V, since it is open source and trade restrictions can't be applied. If a major Chinese cell phone manufacturer can no longer get access to the latest ARM chips, then switching to Risc-V will be attractive. This is a big risk that Trump is taking, because if the rest of the world invests in Risc-V, then it might greatly reduce Intel, AMD and ARM's influence and leadership, having the opposite effect to what Trump wants.

The Software Chicken & Egg Problem

If you create a wonderful new CPU, no matter how good it is, you still need software. To start, you need operating systems, compilers and debuggers. Developing these can be as expensive as developing the CPU chip itself. This is where open source comes to the rescue. UC Berkeley, along with many other contributors, added Risc-V support to the GNU Compiler Collection (GCC) and worked with Debian Linux to produce a Risc-V version of Linux.

Another big help is the availability of open source emulator technology. You are very limited in your choices of actual Risc-V hardware right now, but you can easily set up an emulator to play with. If you’ve ever played with RetroPie, you know the open source world can emulate pretty much any computer ever made. There are several emulator environments available for Risc-V so you can get going on learning the architecture and writing software as the hardware slowly starts to emerge.

Risc-V Basics

The Risc-V architecture is modular: you start with a simple core arithmetic unit that can load/store registers, add, subtract, perform logical operations, compare and branch. There are 32 registers labeled x0 to x31; however, x0 is hard-wired to zero. There is also a program counter (PC). The hardware doesn't assign any other functionality to the registers; the rest is software convention, such as which register is the stack pointer, which registers are used for passing function parameters, etc. Base instructions are 32 bits, but an extension module allows for 16-bit compressed instructions, and extension modules can define longer instructions. The specification supports three different address sizes: 32-bit, 64-bit and 128-bit. This is quite forward thinking, as we don't expect the largest, most powerful computers in the world to exceed 64-bit addressing until 2030 or so.
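To give a flavour of the base instruction set, here is a short sketch of a function in Risc-V Assembly (illustrative only, not assembled here; a0, a1 and ra are the standard software-convention names for x10, x11 and x1):

```asm
# RV32I sketch of: int sum(int a, int b) { return a + b; }
# By convention a0 and a1 hold the arguments, a0 the return value,
# and ra holds the return address.
sum:
    add  a0, a0, a1    # a0 = a0 + a1
    ret                # pseudo-instruction for: jalr x0, 0(ra)
```

Note that even `ret` is just an assembler convenience built on the base jump-and-link-register instruction; the hardware itself only sees the small core instruction set.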

Then you start adding modules like the multiply/divide module, atomic instruction module, various floating point modules, the compressed instruction module, and quite a few others. Some of these have their specifications frozen, others are still being worked on. The goal is to allow chip manufacturers to produce silicon that exactly meets their needs and keeps power utilization to a minimum.

Getting Started

Most of the current Risc-V hardware available to DIYers consists of small, low power/low memory microcontrollers similar to Arduinos. I'm more interested in getting a Risc-V SBC similar to a Raspberry Pi or NVidia Jetson. As a result I don't have a physical Risc-V computer to play with, but I can still learn about Risc-V and play with Risc-V Assembly language programming in an emulator environment.

I’ll list the resources I found useful and the environment I’m using. Then in future blog articles, I’ll go into more detail.

  • The Risc-V Specifications. These are the documents on the ISA. I found them readable, and they give the rationale for the decisions taken along with the reasons for a number of roads they didn't go down. The only thing missing is practical examples.
  • The Debian Risc-V Wiki Page. There is a lot of useful information here. A very big help was the explanation of how to install the Risc-V cross compilation tools on any Debian release; I used these instructions to install the Risc-V GCC tools on my Ubuntu laptop.
  • TinyEMU, a Risc-V Emulator. There are several Risc-V emulators; this is the first one I tried and it's worked fine for me so far.
  • RV8 a Risc-V Emulator. This emulator looks good, but I haven’t had time to try it out yet. They have a good Risc-V instruction set summary.
  • SiFive Hardware. SiFive have produced a number of limited run Risc-V microcontrollers. Their website has lots of useful information and their employees are major contributors to various Risc-V open source projects. They have started a Risc-V Assembly Programmers Guide.
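As a concrete sketch of the cross compilation route above (the package and tool names are assumptions based on what the Debian/Ubuntu repositories currently ship, so they may differ on your release):

```shell
# Install the Risc-V cross compiler and a user-mode emulator
sudo apt install gcc-riscv64-linux-gnu qemu-user

# Cross compile a C program for 64-bit Risc-V (static, so no
# Risc-V shared libraries are needed at run time)
riscv64-linux-gnu-gcc -static -o hello hello.c

# Run the resulting Risc-V binary on your x86 machine under emulation
qemu-riscv64 ./hello
```

The user-mode emulator runs a single Risc-V binary directly under your host Linux, which is a much quicker way to test programs than booting a full emulated system.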

Summary

The Risc-V architecture is very interesting. It is always nice to start with a clean slate and learn from all that has gone before. If this ISA gains enough steam to achieve volumes where it can compete with ARM, it is going to allow very powerful, low cost computers. I'm very hopeful that perhaps next year we'll see a $25 Risc-V based Raspberry Pi 4B competitor with 4GB of RAM and an M.2 SSD slot.

Written by smist08

September 6, 2019 at 6:07 pm

Posted in Business


Low Cost Linux Notebooks


Introduction

Theoretically, a notebook running Linux should be inexpensive, since you don't need a Windows license and Linux runs well without premium hardware. In reality, notebooks sold with Linux preinstalled tend to be expensive, premium hardware. There are companies like Purism and System76 that produce Linux only laptops, but these are high-end and expensive. Similarly, companies like Dell seem to charge extra if you want Linux. In this article we'll look at some options for running Linux inexpensively, along with the tradeoffs, including privacy and security.

Used, Refurbished or Discounted Windows Notebooks

Windows notebooks have the advantage of mass production and competition: there are tons of companies producing them. You can find great deals on sale, plus there is a huge market of refurbished lease returns that offer great deals. Also, companies take returns from retailers like Amazon, make sure they are ok and then sell them at a big discount. You then need to install your favorite Linux distribution and you are up and running. You can even set it up to dual boot either Linux or Windows.

If you are concerned about privacy and security, then the downside of Windows notebooks is that they run the UEFI BIOS. This BIOS has backdoors built in so the NSA, and probably other governments, can remotely take control of your computer.

All that being said, if a notebook runs Windows well, it will run Linux better. A great way to bring an old slow laptop or notebook back to life is to wipe Windows and replace it with Linux. I'm writing this on an old HP laptop which became slower and slower running Windows 10. Now, with Ubuntu Linux, it runs great. No more Windows bitrot, and it has a whole new life.

Chromebooks

Even cheaper than Windows notebooks are Chromebooks. These are notebooks designed to run Google's Chrome OS. They are cheaper because they don't require a Windows license and they usually don't include a hard drive; instead they have a small memory card, usually 16GB or 32GB. Chrome OS is based on a Linux kernel, but restricts you in a few ways. You need to sign on using a Google ID, then you install Apps (basically Android apps) via the Google Play store.

Earlier versions couldn't run regular Linux apps; however, Google has been relaxing this and now allows you to install and run many Linux apps and run a terminal window. Over time, Chrome OS has been slowly morphing from a portal to Google's web apps into a full client operating system. However, I find Chrome OS is still too limiting, and there is the issue of having to sign on with Google.

Out of the box, you can't just install Linux on a Chromebook; the BIOS is locked to only running Chrome OS. The BIOS in Chromebooks is based on Coreboot, the open source BIOS, which is good; however, Google modified it without providing the source code, so we don't know if they added hooks for the NSA to spy on you. The Google BIOS does provide a developer mode, which gives you a root access terminal session and allows you to install and run flavours of Linux from inside Chrome OS using a set of shell scripts called crouton. Many people prefer this method as they get both Linux and Chrome OS at the same time.

Upgrade the BIOS

If you want to boot directly into an alternate OS, you usually need to upgrade the Chromebook's BIOS to allow this. There are two ways to do this: one is reversible; the other isn't, and with it you run the risk of bricking your device. I bought an inexpensive refurbished Dell Chromebook 11 off Amazon for $100 (CAD). The Dell's BIOS is divided into two parts; one part is upgradable, and the change can be reversed using a recovery USB stick. Replacing the other part requires disassembling the notebook, removing a BIOS write protect tab and then reflashing the whole BIOS.

I went the reversible route: I made a recovery USB stick and upgraded the BIOS to support booting other operating systems. This isn't perfect, as you are still using Google's unknown BIOS and you have to hit control-L every time you boot to run your alternate operating system.

The reason people will risk replacing their whole BIOS is to get a pure version of Coreboot that hasn’t been tampered with by Google. You then have full control of your computer, no developer mode and no control-L to boot. Perhaps one day I’ll give this a try.

Once you have your BIOS updated, you can install Linux from a USB stick. I chose to install GalliumOS, which is tailored for Chromebooks. It installs a minimal Linux, since it knows Chromebooks don't have much disk space. It also includes all the drivers needed for typical Chromebook trackpads, Bluetooth and WiFi. The GalliumOS website has great information, with links on how to upgrade your BIOS and otherwise prepare for and complete a successful install.

Another choice is Lubuntu (Light Ubuntu), which is Ubuntu Linux optimized for low memory hardware. I didn't like this distro as much, probably because it is so optimized for low memory; I have 4GB of RAM, and it is disk space I'm short of (only 16GB). So I didn't really need the low memory desktop, and would have preferred LibreOffice being left out.

A great source of info on updating Chromebook BIOSes is MrChromebox. It's interesting because they also have lots of information on how to install a UEFI BIOS on a Chromebook, so you can use it as a cheap Windows notebook. You could install UEFI and then run Linux, but why would you want to, unless you want to be helpful to the NSA and other government spy agencies?

Impressions/Summary

Sadly, running Linux on a converted Windows notebook gives the better experience. At this point, despite the privacy concerns, the UEFI BIOS works better with Linux than Coreboot. On the Chromebook, besides the nuisance of having to hit control-L every time it boots, I found some things just didn't work well. The main problem I had was with closing and opening the lid: Linux's suspend function didn't work properly. Often when I opened the lid, Linux didn't unsuspend, and I'd have to do a hard power off and power on, which then resulted in a disk corruption scan. Otherwise Bluetooth, WiFi and the trackpad work fine.

I also think the small memory cards are a problem; you're better off booting from a regular SSD. These are inexpensive and give you way more space with better performance. I wish there was a cheap Chromebook with an M.2 interface, or even one where the memory card isn't glued to the motherboard and is in an accessible location.

I really want an inexpensive notebook with privacy and security. The best option right now is to convert a Chromebook over to full Coreboot and then run a privacy oriented version of Linux like PureOS, but right now this is quite a DIY project.
Written by smist08

August 9, 2019 at 6:46 pm

Posted in Business


Spectre Attacks on ARM Devices


Introduction

I predicted that 2018 would be a very bad year for data breaches and security problems, and we have already started the year with the Intel x86 specific Meltdown exploit and the Spectre exploit, which works on all sorts of processors and even on some JavaScript systems (like Chrome). Since my last article was on Assembler programming, and most of these types of exploits are created in Assembler, I thought it might be fun to look at how Spectre works and get a feel for how hackers can retrieve useful data out of what seems like nowhere. Spectre is actually a large new family of exploits, so patching them all is going to take quite a bit of time, and, like the older buffer overrun exploits, they are going to keep reappearing.

I've been writing quite a bit about the Raspberry Pi recently, so is the Raspberry Pi affected by Spectre? After all, Spectre affects all Android and Apple devices based on ARM processors. The main Raspberry Pi operating system is Raspbian, which is a variant of Debian Linux optimized for the Pi. A recent criticism of Raspbian is that it is still 32-bit. It turns out that running the ARM in 32-bit mode eliminates a lot of the Spectre attack scenarios; we'll discuss why in this article. If you are running 64-bit software on the Pi (like running Android) then you are susceptible. You are also susceptible to the software versions of this attack, like those in JavaScript interpreters that support branch prediction (like Chromium).

The Spectre hacks work by exploiting how processor branch prediction works, coupled with how data is cached. The exploits use branch prediction to access data they shouldn't, and then use the processor cache to retrieve the data for use. The original article by the security researchers is really quite good and worth a read. It's available here. It has an appendix at the back with C code for Intel processors that is quite interesting.

Branch Prediction

In our last blog post we mentioned that all the ARM 32-bit data processing instructions can be conditionally executed. This is because a branch instruction forces the instruction pipeline to be cleared and restarted, which really stalls the processor. The ARM 32-bit solution was good as long as compilers were good at generating code that efficiently utilized it. Remember that most code for ARM processors is compiled using GCC, and GCC is a general purpose compiler that works on all sorts of processors; its optimizations tend to be general purpose rather than processor specific.

When ARM evaluated adding 64-bit instructions, they wanted to keep the instructions 32 bits in length, but they also wanted to add a bunch of instructions (like integer divide), so they made the decision to eliminate the bits used for conditionally executing instructions and have a bigger opcode instead (and hence lots more instructions). I think they also considered that their conditional instructions weren't being used as much as they should be and weren't earning their keep. Plus they now had more transistors to play with, so they could do a couple of other things instead: they lengthened the instruction pipeline well beyond the previous three stages, and they implemented branch prediction. Here the processor keeps a table of 128 branches and the route each took last time through. The processor then executes the most commonly chosen branch, assuming that once the conditional is figured out, it will very rarely need to throw away the work and start over. Generally this longer pipeline with branch prediction leads to much better performance. So what could go wrong?
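As a sketch (not assembled or run here) of the conditional execution that was dropped, compare computing the maximum of two registers on 32-bit ARM versus 64-bit ARM:

```asm
@ ARM 32-bit: conditional execution avoids a branch entirely
CMP   r0, r1          @ compare r0 with r1
MOVLT r0, r1          @ executed only if r0 < r1: r0 = r1

@ AArch64: no general conditional execution; a conditional select
@ instruction fills a similar role without a branch
CMP  x0, x1           @ compare x0 with x1
CSEL x0, x0, x1, GE   @ x0 = (x0 >= x1) ? x0 : x1
```

For anything more complicated than a conditional select, 64-bit code has to branch, which is where the longer pipeline and branch prediction come in.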

Consider the branch statement:
if (x < array1_size)
    y = array2[array1[x] * 256];


This looks like a good bit of C code to test that an array index is in range before accessing it. If it didn't do this check, we could get a buffer overrun vulnerability: by making x larger than the array size, we could access memory beyond the array. Hackers are very good at exploiting buffer overruns. But sadly (for hackers), programmers are getting better at putting these sorts of checks in (or having automated tools or higher level languages do it for them).

Now consider branch prediction. Suppose we execute this code hundreds of times with legitimate values of x. The processor will see the conditional is usually true and the second line is usually executed. So when this code is executed, branch prediction will just start execution of the second line right away and work out the first line in a second execution unit at the same time. But what if we enter a large value of x? Now branch prediction will speculatively execute the second line, and y will get a piece of memory it shouldn't. But so what? Eventually the conditional in the first line will be evaluated and that value of y will be discarded. Some processors will even zero it out (after all, they do security review these things). So how does that help the hacker? The trick turns out to be exploiting processor caching.

Processor Caching

No matter how fast memory companies claim their super fast DDR4 memory is, it really isn't, at least compared to CPU registers. To get a bit of extra speed out of memory access, all CPUs implement some sort of memory cache, where recently used parts of main memory are cached in the CPU for faster access. Often CPUs have multiple levels of cache: a super fast one, a fast one and a not quite as fast one. The trick to getting at the incorrectly calculated value of y above is to somehow figure out how to access it from the cache. No CPU has a read-from-cache assembler instruction; this would cause havoc and definitely be a security problem. The real CPU vulnerability is that the incorrectly calculated buffer overrun value y is in the cache. Hackers figured out not how to read this value directly, but how to infer it by timing memory accesses. They could clear the cache (this is generally supported, and even if it isn't you could read lots of zeros), then time how long it takes to read various bytes. Basically, a byte in cache will read much faster than a byte from main memory, and this reveals what the value of y was. Very tricky.

Recap

So to recap, the Spectre exploit works by:

  1. Clear the cache
  2. Execute the target branch code repeatedly with correct values
  3. Execute the target with an incorrect value
  4. Loop through possible values timing the read access to find the one in cache

This can then be put in a loop to read large portions of a program's private memory.

Summary

The Spectre attack is a very serious new technique for hackers to use to get at our data. This will be like buffer overruns: there won't be one quick fix, and people are going to be patching systems for a long time on this one. As more hackers understand this attack, there will be all sorts of creative offshoots that wreak further havoc.

Some of the remedies like turning off branch prediction or memory caching will cause huge performance problems. Generally the real fixes need to be in the CPUs. Beyond this, systems like JavaScript interpreters, or even systems like the .Net runtime or Java VMs could have this vulnerability in their optimization systems. These can be fixed in software, but now you require a huge number of systems to be patched and we know from experience that this will take a very long time with all sorts of bad things happening along the way.

The good news for Raspberry Pi Raspbian users is that the ARM in the older 32-bit mode isn't susceptible; it is only susceptible through software routes like JavaScript. But as hackers develop these techniques going forward, perhaps they will find a combination that works on the Raspberry Pi, so you can never be complacent.
Written by smist08

January 5, 2018 at 10:42 pm

Predictions for 2018


Introduction

As 2017 draws to a close, I see a lot of predictions articles for the new year. I’ve never done one before, so what the heck. Predictions articles are notorious for being completely wrong, so take this with a grain of salt. The main problem is that things tend to take much longer than people expect so sometimes predictions are correct, but take ten years instead of one. Then again some predictions are just completely wrong. Some predictions keep reappearing and never coming true, like Linux replacing Windows or Microsoft releasing a successful phone. I think most writers find they get a lot of readers on these articles and then no one bothers to check up on them a year later. I’ll assume this is the case and go ahead and make some predictions. Some of these will be more concrete and some will be continuing trends.

Blockchain/Bitcoin

I'm not going to make any predictions on the value of Bitcoin. The more interesting part is the blockchain algorithm behind it, which provides a way to ensure reliable transfers of money in a distributed manner. The real disruption will come when services like credit or debit cards start to be supplanted by blockchain transactions that don't require any centralized authority. Several big companies like IBM are investing heavily in the infrastructure to support this. Right now credit and debit cards charge very high fees, and many businesses are highly motivated to find an alternate solution. Blockchain offers a ray of hope to remove the transaction charge/tax that exists today on every transaction. I doubt that credit and debit cards will disappear this year, but I do predict that blockchain will start to appear in a number of business to business financial exchanges, perhaps something like Walmart and their suppliers. This will be the start of a long decline for the existing credit and debit card companies unless they innovate and reduce their costs. Right now they are going the route of lobbying governments to make blockchain illegal, but like the music industry protecting CDs, they are fighting an ultimately losing battle.

AI

What we are calling Artificial Intelligence will continue to evolve and become more and more useful. We won't reach true strong AI this year, and the singularity is still a ways off, but the advances are coming quickly, both on the algorithms side and in the hardware to run them on. Will this be the year of the self driving car? Perhaps in small numbers; we are already seeing self driving taxis in Singapore and Phoenix, and I think we are primed for this to take off big time. Some of the big cost savings will come from self driving buses, taxis and trucks. However, governments still need to figure out how to alleviate the disruption this will cause to the work force. We will see more and more AI solutions rolled out in sales, inventory replenishment and scientific research. Speech, translation and handwriting recognition systems will continue to get better and better, as will predictive systems that suggest movies to watch and music to listen to. Products like Alexa and Google Home will become more widespread and their perceived intelligence will improve daily.

Privacy and Security

2017 was a very bad year for data breaches, ransomware attacks, government interference and a general trend toward imposing restrictions on the Internet. 2018 will be worse. We have national security agencies like Russia's operating with impunity. We have rogue nations like North Korea launching ransomware attacks. We have the removal of Net Neutrality in the USA allowing ISPs and the government to spy on everything you do. Due to the amounts of money involved and a general lack of oversight or prosecution from governments, 2018 will set new records for data breaches, stealing of personal information, botnets and ransomware attacks.

DIY

In the early days of personal computers the Apple II and IBM PC were quite open hardware architectures with slots for expansion boards and all sorts of interface capabilities. Software was also open, interfaces were documented (either by the manufacturer or reverse engineers) and you could run any software you liked. Now hardware is all closed with no interface slots and you are often lucky to get a USB port. With many modern devices you can’t even replace the battery.

With the introduction of the $35 Raspberry Pi, suddenly DIY and home hardware projects have had a resurgence. Since the Raspberry Pi runs Linux, you can run any software you like on it (ie no regulated App store).

The Raspberry Pi won't have a refresh until 2019, but in the meantime many companies, seeing an opportunity, are offering similar boards with more memory and other enhancements. In 2018 we'll see the continuing explosion of Raspberry Pi sales and an explosion of add-ons and DIY projects. All the similar and clone products should also do well and fill some niches that the Pi has ignored.

Low Cost Computers

The Raspberry Pi showed you can make a fully useful computer for $35. Others have noticed, and Microsoft has produced an ARM version of Windows. Now we are seeing small complete computers based on ARM processors being released. Right now they are a bit expensive for what you get, but for 2018 I predict we are going to start seeing fully usable computers for around $200. These will be more functional than the existing x86 based Chromebooks and Netbooks and allow you to run a choice of OS's, including Linux, Android and Windows. I think part of what will make these more successful is that emulation software has gotten much better, so you can run x86 programs on these devices now. Expect to see more RAM than a Pi and SSD drives installed. For laptops, expect quite long battery life.

AR/VR

Augmented Reality and Virtual Reality have received a lot of attention recently, but I think the headsets are still too clunky and these will remain a small niche device through 2018. Popular perhaps in the odd game, not really mainstream yet.

Cloud Migration

People's cloud migrations will continue, but due to the complexity of hybrid clouds and Infrastructure as a Service (IaaS), many are going to reconsider. Companies will rethink managing their own software installations and just adopt Software as a Service (SaaS). Many companies will move their data centers to the cloud, whether Amazon, Google, Microsoft or another, but they will find this quite complex and quite expensive over time. This will cause them to consider migrating from their purchased, installed applications to true SaaS offerings; then they don't have to worry about infrastructure at all. Although IaaS will continue to grow in 2018, SaaS will grow faster. Similarly, at some point in a few years IaaS will reach a maximum and start to slowly decline. The exception will be specialty infrastructures, like those with specialized AI processors or GPUs, that can perform specific high intensity jobs but don't require a continuous subscription.

Summary

Those are my predictions for 2018. Blockchain starting to blossom, security and privacy under greater attack, AI appearing everywhere (and not just marketing material), DIY gaining strength, dramatically lower cost computers, not much in AR/VR and cloud cycling through local data centers to IaaS to SaaS. I guess we can check back next year to see how we did.
Merry Christmas and Happy New Year.
Written by smist08

December 21, 2017 at 9:55 pm

On Net Neutrality


Introduction

With Ajit Pai and the Republican led FCC removing net neutrality regulations in the USA, there is a lot of debate about what this all means and how it will affect our Internet. Another question is whether other jurisdictions, like here in Canada, will follow suit. The Net Neutrality regulations in the USA were introduced by Barack Obama in 2015 to combat some bad practices by Internet Service Providers (ISPs) that were also cable companies; namely, they were trying to kill off streaming services like NetFlix to preserve their monopoly on TV content via their pay by channel model. Net Neutrality put a stop to that by requiring that all data over the Internet's pipes be treated equally. Under net neutrality, streaming services blossomed and thrived. Are they now too big to fail? Can new companies have any chance to succeed? Will we all be paying more for basic Internet? Let's look at the desires of the various players and see what hope we have for the Internet.

Evil Cable Companies

The cable companies want to maintain their cable TV channel business model where they charge TV channels to be part of their packages and then charge the consumers for getting them (plus the consumer has to pay by watching commercials). With the Internet people are going around the cable companies with various streaming services. The cable company charges a flat (high) monthly charge for Internet access usually based on maximum allowable bandwidth. What the cable companies don’t like is that people are switching in droves to streaming services like NetFlix, Amazon Prime or Crave. Like the music companies fighting to save CD sales, they are fighting a losing battle, just pissing off their customers and probably accelerating their decline.

So what do the Cable companies want? They want a number of things. One is to have a mechanism to stifle new technologies and protect their existing business models. This means monitoring what people are doing and then blocking or throttling things they don’t like. Another is to try to make up revenue on the ISP side as cable subscription revenue declines. For this they want more of an App market where you buy or subscribe to bundles of apps to get your Internet access. They see what the cell phone manufacturers are doing and want a piece of that action.

The cable companies know that most people have very limited choices and that if the few big remaining cable and phone companies implement these models then consumers will have no choice but to comply.

Evil Cell Phone Companies

Like the cable companies, the phone companies want to protect their existing business models. To some degree the world has already changed on them, and they no longer make all their money charging for long distance phone calls. Instead they charge rather exorbitant fees for cell phone Internet access. Often, due to mergers, the phone and cable companies are one and the same, so the phone companies often have the same interests as the cable companies. Without net neutrality, the phone companies can likewise start to throttle services they feel compete with their own, like Skype and FaceTime. They also want the power to kill any future technologies that threaten them.

Evil Internet Companies

The big Internet companies like Google and Facebook claim they promote net neutrality, but their track record isn’t great. Apple invented the app market, which Google happily embraced for Android. Many feel the app market is as big a threat to the Internet as the loss of net neutrality. Instead of using a general Internet browser, the expectation is that you use a collection of apps on your phone. This adds to the cost of startups, since they need to produce a website and then both an iOS and an Android app for their service. Apps are heavily controlled by the Apple and Google app stores, and apps that Apple or Google don’t like are removed. This gives Apple and Google very strong power to stifle innovative new companies they feel threatened by. Similarly, companies like Facebook and Netflix have the resources to create apps for all sorts of platforms, so they aren’t really fighting for net neutrality so much as ensuring their apps run on all sorts of TV set top boxes and other device platforms. They don’t mind so much paying extra fees, as this all raises the cost of entry for future competitors.

Evil Government

Why is the government so keen to eliminate net neutrality? The main thing is control. Right now the Internet is like the wild west, and they are scared they don’t have sufficient control of it. They want to promote technologies like deep packet inspection that the ISPs are working on. They would like to be able to act as a man in the middle in secure communications and monitor everything. They would love to be able to remove sites from the Internet. I think many western governments are looking jealously at what China does with its Great Firewall and would love the same control. In the early days of the telephone, the dangers of government abuse were recognized, and that is why laws were put in place requiring search warrants from judges to tap or trace phone calls. Now the government has fought hard to avoid the same oversight on Internet monitoring. They see the removal of net neutrality as their opening to work with a few large ISPs to gain much better monitoring and control of the Internet.

The mandate of the government is to provide some corporate oversight to avoid monopolistic abuses of power. They have failed in this by allowing such a large consolidation of ISPs into very few companies, and then refusing to regulate or provide any checks and balances over this new monopoly. As long as the government is scared of the Internet and considers it in its best interest to restrict it, things don’t look good.

Pirates and Open Source to the Rescue

So far that looks like a lot of power working to control and regulate the Internet. What are some ways to combat that? Especially if there is very little competition in ISPs due to all the mergers that have taken place. Let’s look at a few alternatives.

Competition

Where there is competition, take the time to read reviews and shop around. Often you will get better service with a smaller provider. Even if it costs a bit more, factor in whether you get better privacy and more freedom in your Internet access. Ultimately money drives these decisions, so a consumer revolt can be very powerful. Also beware of multi-year contracts that make it hard to change providers (assuming you actually have a choice).

VPN

In many countries VPNs are already illegal, and that could happen in North America. But if it does, it will greatly restrict people’s ability to work from home. As long as you can use a VPN, you have some freedom and privacy. However, note that most VPNs don’t have the bandwidth for streaming video services, and would likely be throttled if they did.

The Dark Net

Another option is the darknet: setting up Tor nodes and using the Onion browser. The problem here is that it’s too technical for most people, and it’s mostly used for criminal enterprises. But if things start to get really bad due to the loss of net neutrality, more development will go into these technologies and they could become more widespread.

Peer to Peer

BitTorrent has shown that a completely distributed peer to peer network is extremely hard to disrupt. Governments and major corporations have spent huge amounts of time and money trying to take down BitTorrent sites used to share movies and other digital content. Yet they have failed. It could be that the loss of Net Neutrality will drive more development into these technologies and force a lot of services to abandon the centralized server model of Internet development. After all if your service comes from millions of IP addresses all over the world then how does an ISP throttle that?

Use Browsers not Apps

If you access web sites from browsers rather than apps, you are helping promote an open and free Internet. If there isn’t an app store involved, it’s harder for a gatekeeper to make a service disappear. The good thing is that the core technologies in the Mozilla and WebKit browsers are open source, so creating and maintaining browsers isn’t under the control of a small group of companies. Chromium and Firefox are both really good open source browsers that run on many operating systems and devices.

Summary

Will the loss of net neutrality in the USA destroy the Internet? Only time will tell. But I think we will start to see a lot of news stories (if they aren’t censored) over the coming years as the large ISPs start to flex their muscles. We saw the start of this with the throttling of streaming services that caused net neutrality to be enacted in the first place, and we’ll likely see those abuses return fairly quickly.

At least in the western world this sort of bad government decision making can be overridden by elections, but it takes a lot of activism to make changes, given the huge amounts of money the cable and phone companies donate to both political parties.

Written by smist08

December 19, 2017 at 7:22 pm

Your New AI Accountant

leave a comment »

Introduction

We live in a complex world where we’ve accumulated huge amounts of knowledge. Knowing everything for a profession is getting harder and harder. We have better and better knowledge retrieval programs that let us look up information at our fingertips using natural language queries and Google-like searches. But in fields like medicine and law, sifting through all the results and sorting out what is relevant and important is getting harder and harder. Especially in medicine, there is a lot of bogus and misleading information that can lead to disastrous results. This is a prime application area where Artificial Intelligence and Machine Learning are starting to show some real promise. We have applications like IBM’s Watson successfully diagnosing some quite rare conditions that stumped doctors. We have systems like ROSS that provide AI solutions for law firms.

How about AIs supplementing Accountants? Accountants are very busy and in demand. All the baby boomers are retiring now, and far more Accountants are retiring than are being replaced by young people entering the profession. For many businesses, getting professional business advice from Accountants is becoming a major problem. This affects their ability to properly meet financial reporting requirements, comply with legal regulations and generally have a complete understanding of how their business is doing. This article is going to look at how AI can help with this problem. We’ll look at the sorts of things AIs can be trained to do to help perform some of these functions. Of course you will still need an Accountant to provide human oversight and a sanity check, but if things are set up correctly to start with, it will save you a lot of time and money.

Interfaces

If you have an AI with Accounting knowledge, how can it help you? In this section we’ll look at a few ways the AI system could interact with both the employees of the business and the Business Applications the business uses, like their Accounting or CRM systems.

Chatbots

Chatbots are becoming more common. Here you either type natural language queries to the AI, or it has a voice recognition component you can talk to. The query processor is connected to the AI, and the AI is then connected to your company’s databases as well as a wealth of professional information on the Internet. These AIs usually have multiple components for voice input, natural language processing, various business areas of expertise, and multiple ways of presenting results.
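
To give a feel for the text half of such a pipeline, here’s a minimal sketch in Python: classify a typed query’s intent, then route it to a handler that would go query the company’s data. The intents, keywords and responses are all invented for illustration; a real chatbot would use a trained natural language model rather than keyword matching.

```python
# Toy chatbot routing: map a typed query to an intent via keywords,
# then dispatch to a handler. All intent names and keywords are invented.

INTENT_KEYWORDS = {
    "ar_balance": ["owe", "receivable", "outstanding"],
    "cash_position": ["cash", "bank balance"],
}

def classify_intent(query):
    """Return the first intent whose keywords appear in the query."""
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "unknown"

def handle_query(query):
    """Dispatch the query to a (stubbed) business-data handler."""
    intent = classify_intent(query)
    if intent == "ar_balance":
        return "Looking up Accounts Receivable balances..."
    if intent == "cash_position":
        return "Pulling the current cash position..."
    return "Sorry, I didn't understand that."
```

In a real system the handlers would query the Accounting database, and the classifier would be one of the NLP components mentioned above.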

There have been some notable chatbot failures, like Microsoft’s Twitter chatbot which quickly became a racist asshole. But we are starting to see some more successful implementations, like Sage’s Pegg or KLM’s Messenger bot. Plus the general purpose bots like Alexa, Siri and Allo are getting rather good. There are also some really good toolkits, like Amazon Lex, available for developing chatbots, so this is becoming easier for more and more developers.

In-program Advice

There have been some terrible examples of in-product advice, such as the best-forgotten Microsoft Clippy. But with advances in User Centered Design, much less intrusive and subtle ways of helping users have emerged. Generally these require good content, so what they present is actually useful, and they have to be unobtrusive, so they never interfere with someone doing their work unless the user wants to pay attention to them. When they are used, they can offer to make changes automatically, provide more information or keep things to a simple one-line tip.

If these help technologies are combined with an AI engine, then they can monitor what the user is doing and present application- and context-based help: for instance, suggesting that a different G/L account be used for better Financial Reporting, suggesting that the sales taxes on an invoice should be different due to some local regulation, or suggesting additional items that should be added to an Accounting document.
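
As a toy illustration of the G/L account suggestion, here’s a sketch that recommends whichever account has most often been used for the same vendor in past entries. The field names are invented, and a real engine would weigh far more context than just the vendor.

```python
from collections import Counter

# Sketch: suggest a G/L account based on which account was used most
# often for this vendor in past entries. Field names are illustrative.

def suggest_gl_account(vendor, history):
    """history: list of dicts like {"vendor": ..., "gl_account": ...}."""
    counts = Counter(e["gl_account"] for e in history if e["vendor"] == vendor)
    if not counts:
        return None  # no history for this vendor; fall back to a static default
    return counts.most_common(1)[0][0]
```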

These technologies allow the system to learn from how a company uses the product and to make more useful suggestions. As well as having access to industry standards that can be incorporated to assist.

Offline Monitoring

In most larger businesses, the person using the Business Application isn’t the one who needs, or can act on, an Accountant’s advice. Most data entry personnel have to follow corporate procedures and would get fired if they changed what they’ve been told to do, even if it’s wrong. Usually the right recipient is the CFO or someone senior in the Accounting department. In these cases an AI can monitor what is going on in the business and make recommendations to the right person, perhaps seeing how G/L Accounts are being used and sending a recommendation for changes to facilitate better Financial Reporting or regulatory compliance.

Care has to be taken to keep this functionality clear of other unpopular productivity monitoring software that does things like record people’s keystrokes to monitor when they are working and how fast. Generally this functionality has to stick to improving the business rather than be perceived as big brother snitching on everyone.

Summary

Most small business owners consider Accounting a necessary evil they are required to do to submit their corporate income tax. They do the minimum required and don’t pay much attention to the results. But as their company grows, their Accounting data can give them great insights into how their business is running. Managing Inventory, A/R and A/P makes a huge difference to a company’s cash flow and profitability. Correctly and proactively handling regulatory compliance can be a huge saver of both time and of costs in fines and lawsuits.

It used to be that sophisticated programs to handle these things required huge IT departments and millions of dollars invested in software, and were really only available to large corporations. With the current advances in AI and Machine Learning, much of this sophisticated functionality can be integrated into the Business Applications used by small and medium sized businesses. In fact, in a few years this will be a mandatory feature that users expect in all the software they use.

Written by smist08

July 29, 2017 at 8:42 pm

Making Business Applications Intelligent

leave a comment »

Introduction

Today Business Applications tend to be rather boring programs which present the user with complicated forms that need to be filled in with a lot of detail. Accuracy is paramount, and there are a lot of security measures to prevent fraud and theft. Companies need to hire large numbers of people to enter data very repetitively into these forms. With modern User Centered Design these forms have become a bit easier to work with, and have progressed quite a bit since the original Business Apps on 3270 terminals connected to IBM mainframes, but I don’t think anyone really considers these applications fun. Necessary and important, yes, but still not many people’s favorite programs.

We’ve been talking a lot about the road to strong AI and we’ve looked at a number of AI tools like TensorFlow, but what about more practical applications that are possible right now? My background is working on ERP software, namely Sage 300/Accpac. In this article I’ll be looking at how we’ll be seeing machine learning/AI algorithms start to be incorporated into standard business applications. A lot of what we will talk about here will be integrated into many applications including things like CRM and Business Analytics.

Many of the ideas I talk about in this article are available today, just not all in the same place. Over the coming years I think we’ll see most of these become standard expected features in every Business Application. Just like we expect modern User Centered Design, tomorrow we will expect intelligent algorithms supporting us behind the scenes in everything we do.

Very High Level Diagram of the Main Components of an Intelligent Business Application

Some Quick Ideas

With Machine Learning and AI algorithms there are opportunities ranging from many small improvements to Business Applications, through major changes in the way things work, all the way up to automating many of the processes we currently perform manually. Often small improvements make a huge difference to the lives of current users and are the easiest to implement, so I don’t want to ignore these possibilities on the way to pursuing larger, more ambitious, longer term goals. Perhaps these AI applications aren’t as exciting as self-driving cars or real time speech translation, but they will make a huge difference to business productivity and lead to large cost savings for millions of companies. They will provide real business benefit: better accuracy, better productivity and automated business processes that lead to real cost savings and real revenue boosts.

Better Defaulting of Fields

Currently fields tend to be defaulted based on configuration screens configured by administrators. These might change based on an object like a customer or customer group, but tend to be fairly static. An algorithm could watch what a user (or all the users at a company) tend to use and make much more intelligent defaults. These could be based on various contexts of other fields, time/date, current promotions, even news feed items. If defaults are provided more intelligently, then it will save users huge time in data entry.
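
As a sketch of the idea, assuming the application can record what users actually enter, a learned default can be as simple as the most frequent value seen for a field in a given context. The context and field names below are invented:

```python
from collections import Counter, defaultdict

# Sketch of learned defaulting: record what users enter for a field in
# a given context (say, a customer), then default to the most common value.

class DefaultLearner:
    def __init__(self):
        self._counts = defaultdict(Counter)

    def observe(self, context, field, value):
        """Record one value a user actually entered."""
        self._counts[(context, field)][value] += 1

    def default_for(self, context, field, fallback=None):
        """Return the most common value seen, or the static fallback."""
        counts = self._counts[(context, field)]
        if not counts:
            return fallback
        return counts.most_common(1)[0][0]
```

A production version would also weigh the other contexts mentioned above (time/date, promotions, related fields) rather than a single key.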

Better Auto-Suggestions

Currently auto-suggestions on fields tend to be based on a combination of previous values entered and performing a “Google-like” search on what has been typed so far. Like defaulting this could be greatly improved by adding more sophisticated algorithms to improve the suggestions. The real Google search already does this, but most “Google-like” searches integrated into Business Apps do not. Like defaulting, having auto-suggestions give better more intelligent recommendations will greatly improve productivity. Like Google Search uses all your previous searches, trending topics, social media feeds and many other sources, so could your Business Application.
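
A toy version of that improvement: rank prefix matches by frequency and recency instead of alphabetically. The scoring weights here are arbitrary placeholders for what a real system would learn:

```python
# Sketch: rank auto-suggestions by how often and how recently each value
# was used, not just by alphabetical prefix match.

def rank_suggestions(prefix, usage):
    """usage: list of (value, use_count, last_used_day) tuples (illustrative)."""
    matches = [u for u in usage if u[0].lower().startswith(prefix.lower())]
    # Score = frequency plus a small recency bonus; the 0.1 weight is arbitrary.
    matches.sort(key=lambda u: u[1] + 0.1 * u[2], reverse=True)
    return [value for value, _, _ in matches]
```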

Fraud Detection

Credit card companies already use AI to scan people’s credit card purchasing patterns, as well as the patterns of people using stolen credit cards, to flag when they think a credit card has been stolen or compromised. Similarly, Business Applications can monitor various company procedures and expenses to detect theft (perhaps strangeness in Inventory Adjustments) or unusual payments. Here there could be regulatory restrictions on what data can be used; for instance, HR data is probably protected from being incorporated into this sort of analysis. Currently theft and fraud are a huge cost to businesses, and AI could help reduce them. Sometimes just knowing that tools like this are being used can act as a major deterrent.
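
To hint at how the Inventory Adjustment example might work, here’s a crude anomaly check using a simple z-score over past adjustment sizes. Real fraud detection uses far richer models; this only shows the shape of the idea, and the threshold is an assumption.

```python
import statistics

# Sketch: flag an inventory adjustment that is unusually large compared
# to the history of adjustments for the same item, via a z-score.

def is_suspicious(past_adjustments, new_adjustment, threshold=3.0):
    mean = statistics.mean(past_adjustments)
    stdev = statistics.pstdev(past_adjustments)
    if stdev == 0:
        # No historical variation at all: anything different stands out.
        return new_adjustment != mean
    return abs(new_adjustment - mean) / stdev > threshold
```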

Purchasing

Algorithms could be used to better detect when items need reordering, helping reduce inventory levels. Further, the algorithms can continuously search vendor prices looking for deals, and consider whether it’s worth buying now at a cheaper price and incurring the inventory expense, or waiting. When you regularly purchase thousands or more items, a dynamic algorithm keeping track of things can really help.
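
The buy-now-or-wait decision boils down to comparing the discount savings against the extra cost of carrying the inventory longer. This is a deliberately simplified sketch; the 25% annual carrying-cost rate is an illustrative assumption:

```python
# Sketch: buy early at a sale price only when the savings outweigh the
# extra cost of holding the inventory longer. All rates are illustrative.

def should_buy_early(regular_price, sale_price, quantity,
                     extra_days_held, annual_carrying_rate=0.25):
    savings = (regular_price - sale_price) * quantity
    extra_carrying_cost = (sale_price * quantity) * annual_carrying_rate \
                          * (extra_days_held / 365.0)
    return savings > extra_carrying_cost
```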

Customer Data

When you get a new customer you need all sorts of information, such as their address, phone number, contacts, etc. Perhaps an algorithm could search the web and fill in this information automatically (a specific example of better defaulting). Plus the AI could scan various web sources (some perhaps pay services for credit ratings and such) to suggest a good credit rating/limit for the new customer. The algorithm could also run in the background and update existing customers as this data changes. Knowing your customers and keeping their data up to date is a major challenge for companies with many customers, and much of this work can be automated.

Chasing Accounts Receivables

Collecting money is always a major challenge for every company, and much of this work could be automated. Plus, algorithms can watch the paying habits of customers: if a customer always pays at the end of the quarter, there’s no need to worry when they go over 30 days. But if a customer suddenly gets credit rating problems, or their stock tanks, or there is negative news on the company, then you’d better get collecting. Again this is all a lot of work, and algorithms can greatly reduce the manual workload and make the whole process more efficient.
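
That quarter-end-payer logic might look something like this sketch, which flags an invoice only when it is overdue relative to the customer’s own payment history rather than a fixed 30 days. The slack factor is an arbitrary assumption:

```python
import statistics

# Sketch: chase an invoice only when it is overdue relative to this
# customer's own paying habits, not a one-size-fits-all 30 days.

def needs_chasing(days_outstanding, past_days_to_pay, slack=1.5):
    """past_days_to_pay: how long this customer took on prior invoices."""
    typical = statistics.mean(past_days_to_pay)
    spread = statistics.pstdev(past_days_to_pay)
    return days_outstanding > typical + slack * spread
```

A fuller version would also fold in the external signals mentioned above, like credit rating changes or negative news.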

Setting Prices

Setting prices is an art and a science. You need to lower prices to move slow moving items out of inventory, and try to keep prices high to maximize return. You need to be aware of competitors’ prices and watch for those items going on sale. Algorithms can greatly help with this. Amazon is a master of it, maintaining millions of prices with AI all over their web site. Algorithms can scan the web for competitive pricing, watch inventory levels and item costs, and know where we are in a quarter and how much we need to stimulate sales to meet targets. These algorithms can make all the trade-offs between relying on customer loyalty and having to be low cost, etc. Similarly this can affect customer and volume discounts. Once you have a lot of items for sale, maintaining prices is a lot of work, especially in the world of online shopping where everything changes so dynamically. With the big guys like Amazon and Walmart using these algorithms so effectively, you need to as well to be competitive.
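
A real repricing engine weighs all those factors together; as a flavor of the idea, here is a toy rule that slightly undercuts the lowest competitor, discounts slow movers, and never drops below a cost-based floor. All margins and thresholds are invented for illustration:

```python
# Sketch of a toy repricing rule: slightly undercut the lowest competitor,
# discount slow-moving stock, but never price below a minimum margin over
# cost. The numbers are invented, not from any real pricing engine.

def reprice(cost, competitor_prices, weeks_of_stock,
            min_margin=0.10, undercut=0.01, slow_stock_weeks=26):
    floor = cost * (1 + min_margin)
    price = min(competitor_prices) * (1 - undercut)
    if weeks_of_stock > slow_stock_weeks:
        price *= 0.95  # stimulate sales of slow-moving inventory
    return round(max(price, floor), 2)
```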

Summary

This article just gave a few examples of the many places we’ll be seeing AI and Machine Learning algorithms becoming integrated into all our Business Applications. The examples in this article are all possible today and in use individually by large corporations. The cost of all these technologies is coming down and we are seeing these become integrated into lower cost Business Applications for small and medium sized businesses.

As these become adopted by more and more companies, it will become a competitive necessity to adopt them or risk becoming uncompetitive in the fast paced online world. There will still be a human element to monitor and provide policies, but humans can’t perform many of these tasks at the speed and scale that today’s world requires.

For the users of Business Applications, the addition of AI to user interactions should make these applications much more pleasant to operate. Instead of gotchas there will be helpful suggestions and reminders. Instead of needing to memorize and look up all sorts of codes, these will be usefully provided wherever necessary. I think this transition will be as big as the transition we made from text based applications to GUI applications, but in this case I think the real ROI will be much higher.


Written by smist08

July 26, 2017 at 2:03 am