Stephen Smith's Blog

Musings on Machine Learning…

Wearable Devices and Sports

Introduction

I happen to be on holiday this week on the Sunshine Coast (in BC, not Aus). I've been doing a lot of running and cycling, so I thought I would blog a bit on how new devices like GPS watches, step counters and phone apps are helping track sports. I have a Garmin GPS watch and an iPhone 4s. So what can I do with these and what is the potential as these devices improve?

The GCC

This year Sage is again participating in the GCC (the Get the World Moving Corporate Challenge). Basically you form teams of 7 co-workers and each of you wears a pedometer for the duration of the event. You then enter your steps, meters swum and km cycled into the website each day.

You are then tracked as you walk around the world and compete with other teams, either generally, within your company or within your area. The website is quite good and provides lots of useful information and tips on how to improve your health and fitness levels.

Doing this tracking just requires a pedometer and their website; no other high-tech gadgetry is required. It will be interesting to see whether low-tech solutions like this one (though the web site and pedometer are both fairly sophisticated) or solutions requiring more hardware, like smart watches and extra devices, will become the norm.

Garmin GPS Watches

There has been a lot of talk about Apple, Google and Microsoft coming out with smart watches this fall. Further, several manufacturers like Samsung already have devices on the market. Then there have been a number of failures, like Nike's entry in this field. I think a lot of these companies have been looking at the success Garmin has had here. Garmin has transformed itself from manufacturing standalone GPSs (which have now largely been replaced by functionality built into every phone) to making quite useful GPS sports watches.

The watches tend to be a bit bigger than a normal watch but still not uncomfortable to wear when running. Perhaps they aren't the greatest fashion accessory, but they are really quite useful. Besides recording your speed, location, distance and elevation in great detail, they also have heart rate monitors to give you quite a bit of information. Then there is a web site, at no extra charge, where you can store and share all your routes and runs. For instance here.

The info is collected by the watch; when you get back you upload it to your PC and from there to their web site.

Generally this then gives you all sorts of metrics where you can see how you did: your pace for every kilometer, how you did on uphills and downhills, etc. You can then track your progress and have a good idea of how you are doing.
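
Just to make that concrete, here's a rough sketch of how per-kilometre splits could be computed from the cumulative distance and time samples a GPS watch records. The sample data and function name are made up for illustration; Garmin's own software obviously does far more than this.

```python
# Minimal sketch: compute per-kilometre pace splits from cumulative
# (distance_m, elapsed_s) samples, as a GPS watch might record them.
# The sample data below is made up for illustration.

def pace_splits(samples, split_m=1000):
    """Return a list of (km, minutes_per_km) for each completed split."""
    splits = []
    last_time = 0.0
    next_mark = split_m
    for dist_m, time_s in samples:
        if dist_m >= next_mark:
            split_s = time_s - last_time
            splits.append((next_mark // 1000, split_s / 60.0))
            last_time = time_s
            next_mark += split_m
    return splits

if __name__ == "__main__":
    # cumulative distance (m) and elapsed time (s), one sample per minute
    run = [(170 * i, 60 * i) for i in range(1, 31)]
    for km, pace in pace_splits(run):
        print(f"km {km}: {pace:.1f} min/km")
```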

Runtastic

There are quite a few fitness tracking apps for the iPhone. I chose Runtastic because I liked the dashboard display in their App Store ad. Otherwise it's been fine, except for a bit too much promotion for the pro version. There are ads in the app and web site, but at a reasonable level, I think, for the service you are getting.

There are a lot of attachments available to mount your phone on your bike; however, I’ve found that doing this really drains your phone’s battery quickly (i.e. in about an hour) and so isn’t really all that practical.

Runtastic Road Bike

More typically it's better to leave the display off, since then it doesn't seem to use that much battery. Also, if you stop to take pictures or something, make sure you switch back to the app before carrying on. The iPhone doesn't have good multi-tasking, so unless the app is the active one, it probably won't be doing anything.

Once you have finished your ride, the app uploads the data to the website and allows you to share what you are doing via social media (as any of my Facebook friends know). For instance this one here.

Like the Garmin website, this one gives you lots of information and makes it easy to track your progress as you try to improve your sport.

The Future

I think that companies like Apple are looking at this market and hope it is a bit like the early MP3 music player market was. Then a company like Apple could come along, redefine the market, make it dead simple and create a much larger market than what the early technology startups could achieve.

Whether Apple can repeat the iPod success in this market is yet to be seen. And they are certainly going to face a lot of competition as Microsoft and Google are hoping they can do the same thing.

Garmin type devices have better battery life and better durability than phones. However phones have better apps and greatly benefit from continuous Internet connectivity. So what are successful future devices going to need? From my perspective they will need:

  • Better battery life. Operating with only a couple of hours' battery life is insufficient. This should really be a week.
  • Better durability. They can't just fry when they get a little wet in the rain. Cycling and running are outdoor sports performed in any weather. Athletes don't want extra clothing or gear to keep their watch dry. Further, it would be great if these worked for swimming. After all, there are already a great many regular triathlon watches that work great while swimming.
  • Intelligent support for more sports. Useful metrics gathered while golfing for instance. What about soccer, football or hockey?
  • Do not require a separate data plan. If you have to pay $50 per month to a cell phone provider then they are dead in the water.

Another area where there is great research going on is developing more sensors that measure things like blood glucose levels, blood pressure, etc. It will be interesting to see how tracking these additional metrics can help athletes.

There are also appearing apps that intelligently use the Phone’s camera to do things like analyze golf swings and tennis strokes. As these improve we may reach the stage where casual players can get real professional coaching and feedback right from their phone.

On the flip side, there is a lot of concern about the possible privacy implications of these devices. For instance if I record heart rate monitor information and it starts detecting abnormal behavior, could an insurance company find out and cancel my insurance? Could it be used in other adverse ways? Generally this sort of medical information is very protected. Will these devices, services and web sites offer the necessary levels of personal privacy protection? Will I find out I have a heart condition because suddenly I start receiving ads for defibrillators and pacemakers? There is certainly a lot of concern about this out there and there have been many Science Fiction stories about the possible abuses. Hopefully these won't all turn out to be prophetic.

Summary

By Christmas shopping season we are going to be inundated by new intelligent watches and other form factors that can help us track and improve our fitness levels. They will track all sorts of metrics for us, provide feedback and even professional levels of coaching. It will be interesting to see if this sparks a greater level of interest in fitness and sports. Maybe these will even help with the current epidemic levels of obesity in our society.

Written by smist08

July 5, 2014 at 4:17 pm

Google Mobile Trends

Introduction

In a previous blog I talked about Apple's mobile directions following their annual developer's conference. This past week Google held their annual I/O developer conference in San Francisco, so it seems like a good time to comment on Google's mobile trends. There is a lot of similarity between Apple's and Google's directions, but there are also differences, since each company has a slightly different view on what direction to take. Both companies are very large and have their hands in a lot of pies. As a consequence their announcements tend to be quite diverse and it's interesting to see how they try to unify them all.

New Android

Just as Apple announced a new version of iOS, Google announced a new version of Android, namely "L". I don't know what happened to the cute code names like Kit Kat or Ice Cream Sandwich, but "L" it is. One big feature they've added is 64-bit support. Apple recently introduced 64-bit processors in their latest phones and now Google devices can catch up. This means that most new higher end phones are now all going to sport 64-bit quad core processors. This is an amazing amount of computing power in such tiny devices.

As with most new operating systems these days, it includes a new UI look. In the new Android this look is called Material Design and follows the fashion for a flatter, more austere look to things.

There are many other new features in the new version of Android which will hopefully make people more productive.

Wearables Everywhere

A big trend in rumored and announced, but generally not yet shipping, products is wearables. Google leads the way here with Google Glass. Then there are the endless smart watch announcements, rumors and even the odd product.

I have a Garmin GPS Watch with a heart rate monitor. Combined with the website where I upload the data, this is a great device to track my cycling and running. It would be nice if its battery lasted longer and if it was more waterproof so I could swim with it. In the meantime there are many great apps to do the same sort of things with your phone. These all do more than the Garmin watch, but the phone is bulkier and less durable in damp environments.

Having a waterproof watch that can do more than my Garmin and has a longer battery life would be fantastic.

Although it is early days, both Google and Apple see fitness metrics and tracking as a huge potential market. Both companies are partnering and developing new sensors to measure new things, like small waterproof sensors for swimming or unique ways to measure other sports such as golf swings. Similarly, they are looking to measure all sorts of biometric data beyond heart rate, including blood pressure, blood glucose levels and all sorts of other things. Eventually these will morph into a Star Trek style medical tricorder.

Home Automation

Google has now purchased a number of home automation companies, including Nest, and is integrating these with Android to provide full control of everything in your home from your phone: remotely setting the thermostat, receiving smoke detector alerts and monitoring security cameras. Most of these things are available now as separate discrete components, but Google is working especially hard to make this whole area much more unified.

Online Cars

Another big area of interest is integrating into cars. Already most cars can interface to iPhones and Android phones to make hands-free calls and play music. Now the goal is to sell Android (and iOS) into the auto industry, to provide better, more connected GPS (with real-time traffic updates) and access to your whole music library. Further, many car companies are enabling your car to act as a Wi-Fi hotspot.

I’m not sure how far this should go, since it all gets very distracting. Already we have so many potential distractions in cars. And just things like texting are causing many accidents.

Everything in the Cloud

With all Google's products, the emphasis is on storing all data in the cloud. They will only store things on local devices if there is a huge outcry from people that need to work offline (like on airplanes). Chromebooks really showed that this was possible, and Google has led the way in offering lots of free cloud storage and making sure everything they do will interact seamlessly with these cloud documents.

They tout the convenience of this: things are always backed up, so if your laptop is stolen or destroyed you don't lose anything. However, critics worry about the privacy implications of storing sensitive data under someone else's control, especially a search provider's. It's rather scary to corporate compliance officers that sensitive corporate documents might start showing up in people's search results. Often this wouldn't be due to Google doing something malicious, but rather to someone misclicking the visibility of a document so it can be viewed by anyone, where anyone means anyone on the Internet (not just anyone that finds it on the corporate network).

All that being said, Google, Apple and Microsoft are all pushing this model like mad, and a lot of innovations that are in the pipeline completely rely on the adoption of this philosophy. It certainly is convenient to have all your photos and videos automatically uploaded and not to have to worry about sync’ing things by hand anymore.

Big Data

Google really started the big data revolution when they published the details of their Map Reduce algorithm that the original Google search was built upon. This then spawned a huge industry of open source tools and databases all built around this algorithm. Map Reduce was revolutionary since it let people get instant search results when searching over everything. Basically it worked by pre-building a database of everything people might search for, like a giant cache that could return results instantly.

The limitation of Map Reduce is that constructing queries is quite difficult and often requires the database to be rebuilt in a different way. If you don’t do that, although the main query the database solves returns instantly, any other query takes a week to process.
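
To make the idea concrete, here's a minimal, single-process sketch of the Map Reduce pattern (word counting). The real Google and Hadoop implementations distribute the map and reduce phases across thousands of machines and add sorting and fault tolerance; this just shows the shape of the computation.

```python
# Minimal single-process sketch of the MapReduce pattern (word count).
# Real MapReduce distributes the map and reduce phases across many
# machines; this only illustrates the shape of the computation.
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) pairs for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Sum the counts for each word."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the quick dog"]
    print(reduce_phase(map_phase(docs)))
    # {'the': 3, 'quick': 2, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 2}
```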

Google is now claiming that Map Reduce, and all the industry such as Hadoop built on it, is completely obsolete. They were heavily promoting their new Cloud Dataflow service, where they claim the service can do efficient real-time analytics as well as preserve the performance of the main functionality.

It will be interesting to see what this new service can really do and whether it will really threaten all the various NoSQL databases like MongoDB.

Summary

There are a lot of interesting things going on in the mobile world. It will be interesting to see if all our phones are replaced by watches or glasses in a couple of years. It will be interesting to see what great things come of all these new cloud big data services.

Written by smist08

June 28, 2014 at 5:28 pm

Elastic Search

Introduction

We've been working on an interesting POC recently that involved Google-like search. After evaluating a few alternatives we chose Elastic Search. Search is an interesting topic, often associated with Big Data, NoSQL and all sorts of other interesting technologies. In this article I'm going to look at a few interesting aspects of Elastic Search and how you might use it.

Elastic Search is an open source search product based on Apache Lucene. It's all written in Java, and installing it is just a matter of copying it to a directory; as long as you already have Java installed, it's ready to go. An alternative to Elastic Search is Apache Solr, which is also based on Lucene. Both are quite good, but we preferred Elastic Search since it seemed to have added quite a bit of functionality beyond what Solr offers.

Elastic Search

Elastic Search is basically a way of searching JSON documents. It has a RESTful API and you use this to add JSON documents to be indexed and searched. Sounds simple, but how is this helpful for business applications? There is a plugin that allows you to connect to databases via JDBC and to set these up to be imported and indexed on a regular schedule. This allows you to perform Google-like searches on your database, only it isn't really searching your database; it's searching the data it extracted from your database.
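
As a rough sketch of what this looks like in practice, here's how a JSON document might be indexed and then searched over the RESTful API using Python's requests library. The index and field names are made up, and the exact endpoint names vary between Elastic Search versions.

```python
# Rough sketch of indexing and searching a JSON document over the
# Elastic Search REST API. Index and field names are illustrative only,
# and endpoint details vary between Elastic Search versions.
import requests

BASE = "http://localhost:9200"

# Index (add) a document.
doc = {"customer": "ACME Ltd", "city": "Vancouver", "balance": 1250.00}
requests.put(f"{BASE}/customers/_doc/1", json=doc)

# Full text search across the index.
query = {"query": {"match": {"customer": "acme"}}}
resp = requests.post(f"{BASE}/customers/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```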

Web Searches

This is similar to how web search services like Google work. Google dispatches thousands of web crawlers that spend their time crawling around the Internet and adding anything they find to Google's search database. This is the real search process. When you do a search from Google's web site it really just does a read on its database (it actually breaks up what you are searching for and does a bunch of reads). The bottom line though is that when you search, it is really just doing a direct read on its database and this is why it's so fast. You can see why this is Big Data since this database contains the results for every possible search query.

This is quite different from a relational database, where you search with SQL and the search goes out and rifles through the database to get the results. In a SQL database, putting the data into the database is quite fast (sometimes) and then reading or fetching it can be quite slow. In NoSQL or Big Data type databases much more time goes into adding the data, so that certain predefined queries can retrieve what they need instantly. This means the database has to be designed ahead of time to optimize these queries, and then inserting the data often takes much longer (often because this is where the searching really happens).

Scale Out

Elastic Search is designed from the beginning to scale out; it automatically does most of the work of starting and creating clusters. It makes adding nodes with processing and storage really easy, so you can easily expand Elastic Search to handle your growing needs. This is why you find Elastic Search as the engine behind so many large Internet sites like GitHub, StumbleUpon and Stack Overflow. Certainly a big part of Elastic Search's popularity is how easy it is to deploy, scale out, monitor and maintain, and certainly much easier than deploying something based on Hadoop.

Analyzers

When you index your data it's fed through a set of analyzers which do things like convert everything to lower case, split sentences up into individual terms, reduce words to their roots (walking -> walk, Steve's -> Steve), deal with special characters, handle other language peculiarities, etc. Elastic Search has a large set of configurable analyzers so you can tune your search results based on knowledge of what you are searching.
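
As a sketch of how that's configured, an index can be created with custom analysis settings over the same REST API. The analyzer name, filters and field names below are just illustrative, and the settings syntax differs a bit between Elastic Search versions.

```python
# Sketch: creating an index with a custom analyzer (lowercase + English
# stemming) over the REST API. Names and settings are illustrative.
import requests

settings = {
    "settings": {
        "analysis": {
            "analyzer": {
                "my_english": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "porter_stem"]
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "description": {"type": "text", "analyzer": "my_english"}
        }
    }
}
requests.put("http://localhost:9200/products", json=settings)
```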

Fuzzy Search

One of the coolest features is fuzzy search: you might not know exactly what you are searching for, or you might spell it wrong, and Elastic Search magically finds the correct values. When ranking the values, Elastic Search uses something called Levenshtein distance to rank which values give the best results. The real trick is how Elastic Search does this without going through the entire database computing and ranking this distance for everything. The answer is having some sophisticated transformations of what you entered to limit the number of reads it needs to do to find matching terms; combined with the good analyzers above, this turns out to be extremely effective and very performant.
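
For the curious, Levenshtein distance is just the minimum number of single-character insertions, deletions and substitutions needed to turn one string into another. Here's a minimal sketch of the classic dynamic-programming calculation (not how Elastic Search implements it internally):

```python
# Minimal sketch of Levenshtein (edit) distance: the number of
# single-character insertions, deletions and substitutions needed
# to turn one string into another.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("garmin", "garmn"))    # 1
print(levenshtein("kitten", "sitting"))  # 3
```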

Real Time

Notice that since these search engines don't search the data directly they won't be real time. The search database is only updated infrequently, so if data is being rapidly added to the real database, it won't show up in these types of searches until the next update processes it. Often these synchronization updates are only performed once per day. You can tune them to be more frequent, and you can write code to insert critical data directly into the search database, but generally it's easier to just let it be updated on a daily cycle.

Security

When searching enterprise databases there has to be some care in applying security, ensuring the user has the rights to search whatever they are searching through. There have to be controls added so that the enterprise API can only search indexes that the user has the rights to see. Even if the search results don't display anything sensitive, they could still leak information. For instance, if you don't have rights to see a customer's orders but can search on them, then you could probably figure out how many orders each customer places, which could be quite useful information.

Certainly when returning search results you wouldn't reveal anything sensitive like, say, salaries; to get these you would need to click a link to drill down into the application's UI, where full security screening is done.
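
One simple pattern is to put a thin service in front of the search engine that only ever queries the indexes the calling user is entitled to see. Here's a hedged sketch of that idea; the permission table and names are stand-ins for whatever the real application's security model provides.

```python
# Sketch of an enterprise search wrapper that restricts each query to
# the indexes a user is allowed to see. The permission lookup is a
# stand-in for the real application's security model.
import requests

USER_INDICES = {          # illustrative permission table
    "alice": ["customers", "orders"],
    "bob": ["customers"],
}

def secure_search(user, text):
    allowed = USER_INDICES.get(user, [])
    if not allowed:
        return []
    query = {"query": {"query_string": {"query": text}}}
    url = "http://localhost:9200/{}/_search".format(",".join(allowed))
    resp = requests.post(url, json=query)
    return [h["_source"] for h in resp.json()["hits"]["hits"]]

# bob's search never touches the orders index, so it cannot leak
# order counts even indirectly.
print(secure_search("bob", "acme"))
```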

Summary

Elastic Search is a very powerful search technology that is quite easy to deploy and quite easy to integrate into existing systems. A lot of this is due to a powerful RESTful API to perform operations.


The Umbrella Ceiling

Introduction

My wife, Cathalynn, and I were recently discussing issues with people moving to other cities to pursue their careers and the hard decisions that were involved in doing this. My nephew, Ian Smith, is just starting his career and when choosing where to work has to consider what it takes to grow in the role he eventually accepts. When I started at Computer Associates, if you wanted to move up in the organization past a certain point, then you had to move to the company headquarters in New York. Similarly, when Cathalynn was working at Motorola, the upwardly mobile had to relocate to Schaumburg, Illinois.

From Cathalynn Labonté-Smith

Recently, Vancouver hosted a Heritage Classic hockey game at BC Place as have many cities across Canada. An outdoor rink facsimile was made inside an indoor venue to recreate a 1915 game complete with original uniforms and “snow”. The plan was to retract the ceiling on the dome but a torrential downpour kept the giant umbrella deployed. Despite the nostalgia of the game the Vancouver Canucks and Ottawa Senators were playing for real—this game counted for NHL points, so the integrity of the ice had to be maintained.

We've all heard of the glass ceiling. Indeed, yesterday (March 8th) was International Women's Day, a day to reflect on all aspects of women's equality and well-being. In the corporate world, how are we doing? According to Catalyst only 4.6% of Fortune 1000 companies have women CEOs (http://www.catalyst.org/knowledge/women-ceos-fortune-1000).

We've all heard of hitting the glass ceiling; however, living on the West Coast and working in the high technology sector, we have what I call an umbrella ceiling that applies to both genders. Umbrella in the down or sun position: you are blessed with a lifestyle that promotes health and well-being, with a year-round outdoor playground and cultural diversity. Umbrella in the up or rain position: you are blocked from moving on to a top job within any corporation that has a head office outside of British Columbia; to move up, you have to leave. We've been to many a tearful going-away party. But then if you stay, as the Smiths have, where our roots and family are, you may spend your weekends hiking, snowboarding, cycling, gardening, wine-tasting, cross-border shopping in Seattle and in many other wonderful pursuits, so that's cool too.

Does it have to continue to be this way? With all the technology like Skype, other teleconferencing software, cloud applications, mobile phones, portals, access to travel and other collaborative tools that are available, why do corporations still tend to centralize top officers in one location? Or can companies truly embrace the mobile workforce, including more women at the CEO level? Are they missing out on or losing top talent through this-is-the-way-we've-always-done-itism?

I’m turning this over to the expert, Mr. Steve himself. Cat out.

Physical versus Virtual Offices

A lot of discussion comes down to how important is face-to-face interaction. How much can be done virtually via Skype, e-mail, telepresence, chat and other collaborative technologies?

My own experience is that there are a lot of communication problems that can easily be cleared up face-to-face. Often without direct interaction, misunderstandings multiply and don’t get resolved. Probably the worst for this is e-mail. Generally, programmers don’t like to talk on the phone and so will persist with e-mail threads that lead nowhere for far too long rather than just picking up the phone and resolving the issue.

But with video calls now so routine, can much be handled this way instead, with physical meetings kept to a minimum? Another thing that limits interaction is living in different time zones, which restricts how much time you have to interact. For example, I have days bookended by early morning and late evening conference calls.

Generally, office design has improved over the years as well, to better facilitate teamwork and collaboration. If you aren't in this environment, are you as productive as the people who are?

Tim Bray leaves Google to stay in Vancouver

A recent high profile case of this was Tim Bray who worked at Google but lives in Vancouver. He gave a quick synopsis on his blog here. Google has a reputation as a modern web cloud company, and yet here is a case where having someone physically present is the most important qualification for the job. If Google can’t solve this problem, does anyone else have a chance?

Though personally it seems to me that Tim accepted the position at Google on the assumption he would move to California, so staying in Vancouver and just pretending he would move seems a bit passive-aggressive.

Mobility of CEOs

The ultimate metric of all this is how mobile is the CEO of a company. Does the CEO have to physically be present in the corporate headquarters for a significant percentage of their time? Does the CEO have to have a residence in the same city as the corporate headquarters? Is even the idea of a physical corporate headquarters relevant anymore in today’s world?

Many top executives spend an awful lot of their time on airplanes and in hotels. To some degree, does it really matter where they live? After all, for modern global companies, getting the necessary face-to-face time with all the right people often can't be done from the corner office. Is the life of an executive similar to that of George Clooney in Up in the Air?

I think if the CEO is in a fixed location then the upwardly mobile are going to be attracted to that location like moths to a flame. I think there is a strong fear in people of being out of the loop and for executives this can be quite career limiting.

Summary

I tend to think that face-to-face interaction and working together physically as a team has a lot of merit. Just breaking down the barriers to communications in this sort of tight knit environment can still be challenging.

I find that working remotely works very well for some people. But these people have to be strongly self-motivated and have to be able to work without nearly as much direct supervision or oversight.

I’m finding that the tools for communicating remotely are getting better and better and that this does then allow more people to work remotely, but at this point anyway, we can’t go 100% down this road.

If you have any thoughts on this, leave a comment at the end of the article.

Google Forks WebKit

Introduction

WebKit is the underlying HTML rendering library used primarily by the Apple Safari and Google Chrome browsers. It is used in a lot of other projects like the Blackberry Browser, Opera, Tizen, Kindle and even some Microsoft e-mail clients. Even Nokia was a big WebKit user before switching to Windows Phone. Generally it’s been considered a great success, rallying the web around standards and making life easier for web developers.

WebKit is a solid open source project with lots of support. This is one reason it’s so successful. Currently in Internet browsers there are three main HTML rendering engines: the Internet Explorer Trident engine, the Mozilla Firefox Gecko engine and then WebKit.

The big news around the Internet on this front recently is that Google is forking WebKit (meaning starting a new open source project based on WebKit) and taking it in its own direction with a project called Blink. This raises all sorts of questions: What does it mean for web developers? What is Google's real agenda? Will this damage web standards? Will this slow WebKit development? In this blog posting I want to give my perspective on a few of these questions.

History

Actually WebKit was started out of a similar controversy. Back in 2001, Apple forked the KHTML/KJS HTML rendering engine used by the Konqueror browser that is part of the KDE Linux user interface system. Basically Apple wanted something better and more tuned for its OS X project. The result was the Safari browser built on the first WebKit HTML engine. At the time no one in the Linux community was happy about this, but in the end, looking back, success makes everything all right.

So now that Google is forking WebKit, claiming that it’s for the same reasons that Apple forked KHTML, will history repeat itself and a much better HTML rendering engine will emerge? Or will this just fragment the market into more slightly different HTML rendering engines making life more difficult for web developers?

WebKit’s Mobile Success

In recent years, now that Android and the iPhone have completely taken over the mobile phone market, developing web sites with HTML5 and JavaScript has become much easier. This is because WebKit is used in both of these families of devices. This means that to cover 95% of the mobile market you just need to target WebKit, which greatly simplifies development and testing. Further, WebKit follows web standards diligently, keeps up with evolving standards, and has great performance and great quality.

I think that WebKit has been a major contributor to the combined success of both Android and the iPhone. You can easily browse most websites from these devices. Plus when Apps incorporate browser controls they are using WebKit.

Further, both Apple and Google are contributing actively to WebKit. It's been an interesting combination of co-operation and competition. When new hardware comes out, initially it tends only to be accessible to apps, but Google tends to very quickly add JavaScript APIs for the device to WebKit. Then Apple tends to follow suit quite quickly. Further, each is driven to keep incorporating the latest version into their devices since they don't want to let the other get ahead.

One of the worries of Google forking WebKit and going its own way is that we will lose the competitive nature of Apple versus Google that has been driving WebKit forwards.

Why Fork?

So why is Google forking WebKit? A lot of opinion on the Internet is that this is a strategy to sabotage Apple. I guess Google could be egomaniacal enough to think that WebKit will fail without them participating. But Apple is such a big company with so much money and talent that I think they can do just fine with WebKit; after all, they did start it without Google's help. Further, I suspect the army of independent open source programmers that contribute to WebKit will continue to do so and won't switch to Blink.

Google's official reason is that the code in WebKit is getting too burdened with supporting code for Safari, Chrome and all the other various things it does, and that if they take WebKit and remove anything Chrome doesn't use then they will have a smaller, faster and easier-to-develop code base. Basically they claim they want to move the HTML engine forward, more tightly coupled with their multi-process architecture, to improve security and performance, and that doing this while supporting competing architectures within the same code base is getting harder and harder.

When Apple started WebKit, and later when Google joined it, both Google and Apple were primarily worried about Microsoft and wanted to have browser technology clearly superior to Internet Explorer. Now, with their success, Microsoft is pretty well non-existent in the mobile world. I think as a result Google isn't feeling threatened by Microsoft anymore and is turning its attention to Apple. Generally relations between the two companies have been getting colder and colder in recent years.

Actually Google currently only uses the HTML and CSS rendering part of WebKit, called WebCore. They stopped using the JavaScript component, JavaScriptCore, in favor of their own V8 JavaScript engine. The V8 JavaScript engine has been blowing away the competition in JavaScript benchmarks for some time now. In fact the V8 engine is also the heart of Node.js, the highly successful server-side JavaScript framework. I think Google is looking to get the same sort of success out of Blink that they got from V8.

What’s the Problem?

The problem is for developers. Right now developing good web pages that run nicely anywhere means targeting IE, FireFox and WebKit which then covers the main HTML/CSS rendering engines. Unfortunately HTML and CSS are very complicated and quite subtle. Although all adhere to the published web standards, there are differences in interpretation. Also there are emergent properties that get exploited as features, things that aren’t really in the standard but have appeared in an implementation.

In the mobile world right now, developers have it easier since they can target Android, iOS, Blackberry, Tizen and Symbian by just targeting WebKit. This makes life much easier since you really can develop once and deploy pretty much anywhere. It will be a pity to lose this, and potentially quite expensive for smaller development organizations.

I imagine that many source code files will continue to be shared by WebKit and Blink. But for how long? When will we have to pay attention to differences between Blink-based browsers and WebKit-based browsers?

Summary

Although I find it appealing that Google is hoping to do for HTML/CSS rendering speed what it did for JavaScript execution speed with V8, I'm really worried that this is going to fragment HTML5 development for mobile devices. I tend to think this will cause more web developers to decide that if they need to develop for Android and iOS separately, then they may as well do both natively in apps. To me this would be a sad further fragmentation and polarization of mobile developer communities.

Written by smist08

April 13, 2013 at 3:40 pm

The Singularity

Introduction

In last week's blog post, one of the topics covered was an exercise in predicting what things will be like in ten years. We didn't discuss any negative impacts of technology, like environmental collapse due to gross consumerism. The other thing that wasn't discussed was the prospect of the so-called technological singularity occurring in the next ten years. The singularity is defined as the point at which computers (or networks of computers) become self-aware and exceed human intelligence.

This has been a popular topic in Science Fiction for some time. Interestingly the term is often attributed to John von Neumann who spoke of “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

We’ve all felt how change has been accelerating. As change gets faster and faster, predicting the future becomes harder and harder. The idea behind the singularity is that you cannot predict what will happen on the other side of it. Basically as computers/networks become self-aware and more intelligent than us, then things will start to change so quickly that all our predictions will be out the door.

I think this could happen in the next ten years; there are many projections, like the following chart, that give good evidence we should reach the prerequisite level of complexity between 2020 and 2040.

(Chart: exponential growth of computing power over time.)

Webmind

Robert J. Sawyer, the popular Canadian Science Fiction writer, has an excellent trilogy of books, his WWW series consisting of Wake, Watch and Wonder, which follows a scenario where the Internet becomes alive. This series presents a very positive view of this happening and I highly recommend reading it (disclaimer: I haven't read the third one yet).

I really like how Sawyer used cellular automata like the Game of Life as the model for how intelligence could emerge from the current internet. I tend to think this is thinking along the right track.
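
For anyone who hasn't played with it, the Game of Life is just a grid of cells where each cell lives or dies based on its eight neighbours, yet gliders and other complex structures emerge. A minimal sketch of one generation step:

```python
# Minimal sketch of one step of Conway's Game of Life on a set of
# live cell coordinates. Complex, stable structures emerge from
# these two simple rules.
from collections import Counter

def step(live):
    """live is a set of (x, y) cells; returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider" that travels across the grid forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```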

Vernor Vinge

Mathematician and Science Fiction writer Vernor Vinge wrote a very influential essay on the singularity here. A lot of ideas from the essay are woven into his Science Fiction novels like "A Fire Upon the Deep" or "Rainbows End". I greatly enjoy Vernor's novels and highly recommend them.

Google

In fact, companies like Google are actively working to make the singularity happen. Google founders Larry Page and Sergey Brin are both driving projects within Google to achieve self-awareness and intelligence in the Google data centers, and both put in a lot of personal money to found the Singularity University.

You have to think that the company bringing self-driving cars to market, having personal concierge software like Google Now and with their giant data centers and huge resources are well positioned to bring the Singularity to life (or have they already done it?).

The Negative

Of course there are many Science Fiction works which portray a very negative vision of this happening: in particular the emergence of Skynet in the Terminator series, the enslavement of people as power generators in the Matrix series, as well as HAL in 2001. Generally these set up quite good action movies, but I'm not really sure the types of wars envisioned here are too likely. I tend to think that most negative outcomes for the future would be caused by our own doing, whether war or environmental collapse.

Accuracy of Predictions

Predicting the future has always been very inaccurate. We always predict things will happen much faster than they do. Putting years in novels like 1984 or 2001 quickly shows how slow things can develop. Interestingly back in the 60s for the original Star Trek, people thought we would have warp drive in a few years, but a talking computer that knows everything would be impossible and was quite implausible. Interesting how things do change.

I find news shows that make New Year’s predictions and look at the accuracy of last year’s predictions quite entertaining. Usually all the predictions from last year are wrong. Similarly if you study statistics and the accuracy of predicting trends by projecting graphs and such, you see that the mathematical inaccuracy grows extremely fast. So the graph of computer power above looks quite compelling, but believing the projection it makes is strictly an act of faith and intuition with no mathematical backing.

Is it Possible?

There is a lot of controversy about whether true human type self-aware intelligence is possible with just a Turing machine type computer. There is a lot of skepticism that some other secret sauce is required. Roger Penrose believes that our neurons actually aren’t just like computer logic gates, but that there are quantum effects going on that are necessary to go beyond a Turing machine.

I studied the transitions from stable simple systems to complex chaotic systems as part of my Master’s Degree. As dynamic systems make the transitions from stable simple predictable systems to chaotic systems, they don’t necessarily become completely random. It’s very common to get new stable emergent states that were completely unpredictable from the initial analysis.

I believe that self-aware intelligence is possible with just a Turing machine: that as our computing power and networks get more and more powerful and complex, Chaos Theory will start to apply, and that intelligence is in fact some sort of strange attractor that will eventually emerge.

Just as we get amazing graphic images of fractals from iterations of very simple equations, we get unpredictable but stable complexity emerging. To me this will be the foundation for intelligence.
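
As a tiny illustration, the Mandelbrot set comes from nothing more than iterating z = z*z + c and asking whether the value stays bounded. A minimal sketch that prints a rough ASCII rendering:

```python
# Minimal sketch: complexity from iterating z = z*z + c.
# Prints a tiny ASCII rendering of the Mandelbrot set.
def escapes(c, limit=30):
    z = 0
    for _ in range(limit):
        z = z * z + c
        if abs(z) > 2:
            return True
    return False

for y in range(-12, 13):
    row = ""
    for x in range(-40, 21):
        row += " " if escapes(complex(x / 20.0, y / 12.0)) else "#"
    print(row)
```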

Summary

Making predictions is fun, but usually not accurate. I find it fascinating to think about how intelligence might emerge on the Internet. It's not just being left to emerge or evolve on its own; in fact there are some very rich and powerful people putting quite large amounts of resources into making this happen.

I do think that once this happens (if it happens), that it will be a singularity and that we have no idea how things will progress past that point.

Written by smist08

March 9, 2013 at 8:16 pm

Voice Input and Concierge Services

Introduction

Some of the most exciting new technologies appearing on mobile phones are around voice recognition and concierge or personal assistant type of applications. These include ambitious applications like Apple’s Siri, along with a number of initiatives from Google including Google Now and Google Voice Search.

The voice recognition by itself is a truly amazing technology, but this is only a fraction of the story. After the voice input is recognized, the query is combined with other input, like your location, to determine a lot of context for what you are asking about; the service then identifies the problem domain and gives a truly meaningful answer, along with relevant data, to correctly respond to your query.

Of all the technologies on Star Trek, we don’t see any sign of a working warp drive or transporter, but being able to ask a computer anything on any topic and get a good answer, we seem to have that now. So perhaps if Star Trek IV was set another ten years ahead, then Scotty wouldn’t have had any trouble interacting with our primitive computers.

Device or Service?

A common incorrect assumption is that you can integrate apps running on your phone with these services. This is the wrong way to think about how they work. They aren't a voice recognition/query engine running on your device; in fact they send all the (nearly) raw input to a major data center for processing. Even though there isn't a device API for accessing Siri, developers have found clever ways around this by putting clever things in the contact list and constructing special text messages, but again this is really just using Siri as voice recognition software. The real intent of Siri is much deeper; it's really a task completion engine.

These engines are really taking your voice input and then mapping it to various problem domains, which then talk to many APIs on the backend. The goal isn't to run an app and then just provide a voice recognition engine that translates voice commands into regular app commands as if the user had typed them. The goal is really that you don't need device apps. When you ask Siri a question, you don't need a matching app running: if you ask about airline info, it gets it; if you ask about weather, it gets it. You don't need to run the right app.

In a way a limitation of current mobile phones is the need to download and install so many apps. Do you really need all of these? Most of the apps on my phone are specialized query information gathering apps like weather, news and such. The real beauty of these new personal assistant type applications is that they eliminate the need for all these other apps. Wouldn’t a phone or tablet be much easier if you didn’t need to find and install all these apps? Isn’t this the original appeal of the Internet to PC users? You don’t need to install dozens of applications (which got more and more painful); all you needed was a Browser and nothing else. To some degree these personal assistant applications become a workable Browser for mobile devices, where you no longer need all these apps anymore. Sure there are some special purpose apps for playing games and performing specialized functions, but generally you can just use Siri, Google Voice Search or Google Now for most things that you probably use Apps for now. Sure these aren’t perfect yet, just like the original Netscape Browser wasn’t perfect, but they are getting there very quickly.

Integrating to ERP and CRM

OK, so we don’t integrate to these new services via Apps talking to APIs on devices, so if we want to integrate our CRM or ERP into say Siri, how do we do it? Suppose we want to ask Siri what is the status of an Order from a vendor, or we want to ask Siri what is the credit limit of a customer I’m about to visit?

The key is to have this information available on the Internet via RESTful Web Services like SData. The reason for RESTful Web Services is that they allow discovery by search engine spiders. Generally a short root URL lists how to build the rest of the URLs, and this allows a general engine to discover all the data. RESTful Web Services are the new Internet standard and all these services are built to interact with them.
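
As a hedged sketch of the discovery idea (and nothing like the real SData implementation), here's a toy REST service whose root URL lists its resources so that a crawler can find everything beneath it; Flask and the names used are purely for illustration.

```python
# Illustrative only: a tiny RESTful service whose root URL lists the
# available resources, so a crawler can discover everything beneath it.
# This is a toy, not how SData itself is implemented.
from flask import Flask, jsonify

app = Flask(__name__)

CUSTOMERS = {
    "1": {"name": "ACME Ltd", "credit_limit": 5000},
    "2": {"name": "Globex", "credit_limit": 12000},
}

@app.route("/api/")
def resources():
    # Discovery: a short URL that tells callers how to build the rest.
    return jsonify({"resources": ["/api/customers/"]})

@app.route("/api/customers/")
def customers():
    return jsonify({"customers": [f"/api/customers/{k}" for k in CUSTOMERS]})

@app.route("/api/customers/<cid>")
def customer(cid):
    return jsonify(CUSTOMERS.get(cid, {}))

if __name__ == "__main__":
    app.run()
```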

The key is for vendors (like Sage) to make the right agreements with these services, so that the data can be accessed in a secure way, and you aren’t doing something like exposing all your ERP data to the Internet in general. Security and the rules for who can access what are crucial. Standard sign-on mechanisms like OAuth are going to have to be used.

The other thing is that all this data must be in a central location. This means that any ERP or CRM data that is going to be available to these services must be sync'ed to a central cloud location. This fits in with Sage's connected services strategy of sync'ing key on-premise data to the cloud (of course if you are already running your CRM or ERP in the cloud then you can skip this step). I blogged about Sage's Hybrid Cloud here. From Sage's Hybrid Cloud we can expose the correct data via SData Web Services for anyone that wants to participate in these services. Then Sage can make the correct deals with the services and is responsible for ensuring all the security concerns are set up correctly.

This can then lead to a company’s employees and customers being able to make general inquiries into these services and for the right questions have them mapped to a problem domain in the ERP or CRM space, have the backend systems provide answers with relevant data added from the Hybrid Cloud.

None of these services would look into the Hybrid Cloud in real time; they all operate like search engines, continuously polling sites and updating their master databases. Then, for performance reasons, all the real queries are handled as highly optimized Big Data queries against a master search database, so that all questions are magically answered instantly.

Over time the questions answered can become more and more sophisticated, incorporating more and more sources of business data. Perhaps you can ask Siri: What's the best way to increase my company's revenue? And then get back a useful answer.

Summary

I think these personal assistant type applications are going to become more and more prevalent in the mobile world (or even on regular computers). To me it’s exciting to consider participating in this and to think about all the questions that we can help answer.
