Archive for the ‘Business’ Category
With Apple’s WWDC conference just wrapping up, I thought it might be a good time to meditate on a few of the current trends in the mobile world. I think the patent wars are sorting themselves out as Google and Apple settle, and we are seeing a lot more competitive copying. Apple added a lot of features that competitors have had for a while, along with a few innovations unique to Apple.
The competitive fervor being shown in both the Google and Apple mobile camps is impressive and making it very hard for any other system to keep up.
Apple has had iCloud for a while now, but with this version we are really seeing Apple leverage it. When Google introduced the Chromebook they used this video to show the power of keeping things in the Web. This idea has been copied somewhat by Microsoft. But now Apple has taken it to the next level by allowing you to continue from device to device seamlessly, so you can easily start an e-mail on your phone and then continue working on it on your MacBook. No more having to e-mail things to yourself; it all just works.
Apple also copied some ideas from Google Drive and Dropbox, allowing files to be accessed from non-Apple platforms like Windows as well as shared between applications. So now this is all a bit more seamless. It’s amazing how much free cloud storage you can get by having Google, Microsoft, Apple and Dropbox accounts.
Generally this is just the beginning, as companies figure out neat things they can do once your data is in the cloud. If you are worried about privacy or the NSA reading your documents, you might try a different solution, but for many things the convenience outweighs the worries. Perhaps a bigger worry than the FBI or NSA is how advertisers will be allowed to use all this data to target you. Apple has added some features to really enable mobile advertising; whether these become too intrusive and annoying remains to be seen.
Copying is the Best Compliment
Apple has also copied quite a few ideas from Google, Blackberry and Microsoft into the new iOS. There is a lot more use of transparency (as introduced in Windows Vista). There is now a customizable and predictive keyboard, adding ideas from Blackberry and Microsoft; keyboard entry has been one of Apple’s weaknesses, and this is an attempt to address it. Similarly, the drive option in iCloud is rather late to the game.
Apps versus the Web
There is a continuing battle between native applications and web applications for accessing web sites. People often complain that a native mobile application only gives them a subset of what is available on the full web site, but on the other hand the consensus is that native mobile apps give a much better experience.
True web applications give a unified experience across all devices and give the same functionality and the same interaction models. This is also easier for developers since you only need to develop once.
However Apple is having a lot of success with apps. Generally people seem to find things easier in the Apple App store than in browsing and bookmarking the web. Apple claims that over half of mobile Internet traffic is through iOS apps now (but I’m not sure if this is skewed by streaming video apps like Netflix that use a disproportionate amount of bandwidth).
Yet another Programming Language
Rather than go down the road of Java and C#, Swift has tried to incorporate the ease of use of scripting languages while still giving you full control over the iOS API. How this all works out remains to be seen, but it will be interesting if it makes iPhones and iPads really easy to program, the way the early PCs were back in the BASIC days.
The Internet of Things
Apple introduced two new initiatives, HealthKit and HomeKit. HealthKit is mostly about encouraging the addition of medical sensing devices to your iPhone, whereas HomeKit extends iOS into devices around the home so you can control them all from your iPhone.
HealthKit is designed to centralize all your health related information in one place. There is getting to be quite a catalog of sensors and apps to continuously track your location, speed, heart rate, blood pressure, etc. If you are an athlete, this is great information on your fitness level and how you are doing. Garmin really pioneered this with their GPS watches with attached heart rate monitors. I have a Garmin watch and it provides a tremendous amount of information when I run or cycle. I don’t think this is much use with the iPhone itself, which I always leave behind since I don’t want to risk getting it wet, but it might really take off if Apple releases a smart watch this fall like all the rumors say.
HomeKit is a bit of a reaction to Google buying Nest, the intelligent thermostat. Basically you can control all your household items from your phone, so you can warm up the house as you are driving home, or turn all the lights on and off remotely. We have a cottage with in-floor heating, and it would be nice if we could remotely tell the house to start heating up in the winter a few hours before we arrive; right now it’s a bit cold when we first get there and turn on the heat. However, with zoned heating we would need four thermostats, and at $250 each this gets excessively expensive. I think the price of these devices has to come down quite a bit to create real adoption.
There is a lot of concern about these devices being hacked and interfered with, but if the vendors get the security and privacy right, they are really handy things to have.
Apple has introduced some quite intriguing new directions. Can Swift become the BASIC of mobile devices? Will HealthKit and HomeKit usher in a wave of wonderful new intelligent devices? Will all the new refinements in iOS really help users have an even better mobile experience? Will native apps continue to displace web sites, or will web sites re-emerge as the dominant on-line experience? Lots of questions will be answered over the next few months, but it should be fun playing with all these new toys.
With the recent Heartbleed security exploit in the OpenSSL library, a lot of attention has been focused on how vulnerable our computer systems have become to data theft. With so much data travelling over the Internet and over wireless networks, this has brought home the importance of keeping these systems secure. With the general movement towards an Internet of Things, all our devices, whether our fridge or our car, become possibly susceptible to hackers.
I’ll talk about Heartbleed a bit later, but first perhaps a bit of history with my experiences with secure computing environments.
My last co-op work term was at DRDC Atlantic in Dartmouth, Nova Scotia. In order to maintain security they had a special mainframe for handling classified data and to perform classified processing. This computer was located inside a bank vault along with all its disk drives and tape units. It was only turned on after the door was sealed and it was completely cut off from the outside world. Technicians were responsible for monitoring the vault from the outside to ensure that there was absolutely no leakage of RF radiation when classified processing was in progress.
After graduation from University, my first job was with Epic Data. One of the projects I worked on was a security system for a General Dynamics fighter aircraft design facility. This entire building was built as a giant Faraday cage. The entrances weren’t sealed, but you had to travel through a twisty corridor to enter the building, to ensure there was no straight line for radio waves to pass out. Then surrounding the building was a large protected parking lot where only authorized cars were allowed in.
Generally these facilities didn’t believe you could secure connections with the outside world; if such a connection existed, no matter how good the encryption and security measures, a hacker could penetrate it. The hackers they were worried about weren’t just bored teenagers living in their parents’ basements, but well trained and financed hackers working for foreign governments. Something like the Russian or Chinese version of the NSA.
Van Eck Phreaking
A lot of attention goes to securing Internet connections, but historically data has been stolen through other means. Van Eck phreaking is a technique for listening to the RF radiation from a CRT or LCD monitor and reconstructing the image from that radiation. Using this sort of technique, a van parked on the street with sensitive antenna equipment can reconstruct what is being viewed on your monitor, even though you are using a wired connection from your computer to the monitor. In this case, how updated your software is or how secure your cryptography is just doesn’t matter.
Everything is Wireless
It seems that every now and then politicians forget that cell phones are really just radios, and that anyone with the right sort of radio receiver can listen in. This seems to lead to a scandal in BC politics every couple of years. It is really just a reminder that unless something is specifically marked as using a secure connection or cryptography, it probably doesn’t, and if it doesn’t, anyone can listen in.
It might seem that most communications are secure nowadays. Even Google search now always uses HTTPS, a well-encrypted channel that keeps your search terms a secret between yourself and Google.
But think about all the other communication channels in use. If you use a wireless mouse or a wireless keyboard, these are really just short range radios. Is that communication encrypted and secure? Similarly, if you use a wireless monitor, it’s even easier to eavesdrop on than with Van Eck phreaking.
What about your Wi-Fi network? Is that secure? Or is all non-https traffic easy to eavesdrop on? People are getting better and better at hacking into Wi-Fi networks.
In your car, if you are using your cell phone via Bluetooth, is this another place where eavesdropping can occur?
Heartbleed is an interesting bug in the OpenSSL library that’s caused a lot of concern recently. The following XKCD cartoon gives a good explanation of how a bug in validating an input parameter caused a lot of data to leak to the web.
At the first level, any program that receives input from untrusted sources (i.e. random people out on the Internet) should very carefully and thoroughly validate any input. Here the client tells the server what to reply and the length of the reply; if it gives a length much longer than the actual reply, the server leaks whatever random contents of memory were located nearby.
At the second level, this is an API design flaw: there should never have been a function with parameters that could be abused in this way.
At the third level, what allowed this to go bad is a performance optimization that was put into the OpenSSL library to provide faster buffer management. Before this enhancement, the bug would just have caused an application fault. That would have been bad, but easy to detect, and it wouldn’t have leaked any data; at worst it would have allowed some short-lived denial of service attacks.
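To make the mechanics concrete, here is a toy sketch of the over-read in Python. This is not the real OpenSSL C code; the function, its parameters, and the "memory" contents are all illustrative, standing in for the heartbeat handler and the process memory next to the echoed payload.

```python
def handle_heartbeat(memory: bytearray, payload_offset: int,
                     claimed_len: int, actual_len: int,
                     validate: bool) -> bytes:
    # The fix: refuse heartbeats whose claimed length does not
    # match the payload actually received.
    if validate and claimed_len != actual_len:
        raise ValueError("heartbeat length does not match payload")
    # The buggy path trusts claimed_len and reads past the payload,
    # returning whatever happens to sit next to it in memory.
    return bytes(memory[payload_offset:payload_offset + claimed_len])

# Server "memory": the 3-byte payload "HAT" sits right next to secrets.
memory = bytearray(b"HAT" + b"|secret_key=hunter2|other-user-data")

# Attacker sends a 3-byte payload but claims it is 30 bytes long.
leaked = handle_heartbeat(memory, 0, 30, 3, validate=False)
print(leaked)  # b'HAT|secret_key=hunter2|other-u'

# With validation, a correct request echoes just the payload.
safe = handle_heartbeat(memory, 0, 3, 3, validate=True)
print(safe)    # b'HAT'
```

The one-line length check is all the real patch amounted to conceptually: never trust a length field supplied by the other side of the connection.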
Mostly, exploiting this security hole just returns a bunch of random garbage to the attacker. The trick is to automate the attack and repeatedly try it on thousands of sites until, by fluke, you find something valuable, perhaps a private digital key or a password.
The open source community makes the claim that open source code is safer because anyone can review the source code and find bugs, and people are invited to do this for OpenSSL. I think Heartbleed shows that security researchers became complacent and weren’t examining this code closely enough.
The code that caused the bug was checked in by a trusted coder, and was code reviewed by someone knowledgeable. Mistakes happen, but for something like this, perhaps there was a bit too much trust. I think it was an honest mistake and not deliberate sabotage by hackers or the NSA. The source code change logs give a pretty good audit of what happened and why.
Should I Panic?
In spite of what some reporters are saying, this isn’t the worst security problem that has surfaced. The holy grail for hackers is to find a way to root computers (take them over with full administrator privileges). This attack just has a small chance of providing something to help along that way and isn’t a full exploit in its own right. Bugs in Java, IE, SQL Server and Flash have all allowed hackers to take over people’s computers; some didn’t require anything else, some just required tricking the user into browsing a bad web site. Similarly, e-mail and flash drive viruses have caused, and are likely to keep causing, far more havoc than this particular problem. Another on-going security weakness is caused by government regulations restricting the strength of encryption or forcing the disclosure of keys; these measures do little to help the government, but they really make the lives of hackers easier. And I suspect the biggest source of identity theft is data recovered from stolen laptops and other devices.
Another aspect is the idea that we should be like gazelles and rely on the herd to protect us. If we are in a herd of 100 and a lion comes along to eat one of us, then there is only a 1/100 chance that it will be me.
This attack does highlight the importance of some good security practices, such as changing important passwords regularly (every few months) and using sufficiently complex or long passwords.
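As one way to follow that advice, here is a minimal sketch of generating a long random password using Python's standard `secrets` module (which, unlike `random`, is intended for security-sensitive use). The length and character set are just example choices.

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    # Draw each character with a cryptographically strong RNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a different random password every run
```

Of course, a password manager that generates and stores these for you accomplishes the same thing with less effort.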
All that being said, nearly every website makes you sign in. For web sites that I don’t care about, I just use a simple password, and if someone discovers it, I don’t really care. For sites like personal banking I take much more care, and for sites like Facebook medium care. Generally, don’t provide accurate personal information to sites that don’t need it: if they insist on your birthday, enter it a few days off; if they want a phone number, make one up. That way, if the site is compromised, they just get a bunch of inaccurate data on you. Most sites ask way too many things; resist answering, or answer inaccurately. Also avoid overly nosey surveys. They may claim to be private and anonymous, until they’re hacked.
The good thing about this exploit seems to be that it was discovered and fixed mostly before it could be exploited. I haven’t seen real cases of damage being done. Some sites (like the Canada Revenue Agency) are trying to blame Heartbleed for unrelated security lapses.
Generally, the problems that you hear about are the ones you don’t need to worry about so much. But it is still a safe practice to use this as a reminder to change your passwords and minimize the amount of personally identifiable data out there; after all, dealing with things like identity theft can be pretty annoying. It also helps with the problems that black hat hackers know about and are exploiting, but that haven’t been discovered yet.
You always need to be vigilant about security. However it doesn’t help to be overly paranoid. Follow good on-line practices and you should be fine. The diversity of computer systems out there helps, not all are affected and those that are, are good about notifying those that have been affected. Generally a little paranoia and good sense can go a long way on-line.
In a previous blog article I talked about business continuity: what you need to do to keep Sage 300 ERP up and running with little or no downtime. However, I mushed together two concepts, namely keeping a service highly available and having a disaster recovery plan. In this article I want to separate these two concepts and consider them on their own.
We’ve had to give these two concepts a lot of thought when crafting our Sage 300 Online product offering, since we want to have this service available as close to 100% as possible and then if something truly catastrophic happens, back on its feet as quickly as possible.
There is some common terminology which you always see in discussions on this topic:
RPO – Recovery Point Objective: this is the maximum tolerable period in which data might be lost due to a major incident. So for instance if you have to restore from a backup, how long ago was that backup made.
RTO – Recovery Time Objective: this is the duration of time within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences. For instance if a computer fails, how long can you wait to replace it.
HA – High Availability: usually concerns keeping a system running with little or no downtime. This doesn’t include scheduled downtime, and it usually doesn’t include a major disaster like an earthquake destroying a datacenter.
DR – Disaster Recovery: this is the process, policies and procedures that are related to preparing for recovery or continuation of technology infrastructure which are vital to an organization after a natural or human-induced disaster.
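As a hypothetical illustration of how an RPO follows from a backup schedule, consider backups taken on a fixed interval. The arithmetic is trivial, but the off-by-one-interval trap is common: the worst case is a failure just before the next backup completes.

```python
def worst_case_data_loss_hours(backup_interval_hours: float,
                               backup_duration_hours: float) -> float:
    # A failure just before the next backup finishes loses everything
    # written since the previous backup started.
    return backup_interval_hours + backup_duration_hours

# Nightly backups that take an hour to run: the RPO you can honestly
# promise is about 25 hours, not 24.
print(worst_case_data_loss_hours(24, 1))  # 25
```

So if your stated RPO is 24 hours, nightly backups alone don't quite meet it; you either shorten the interval or add continuous log shipping.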
High Availability
HA means creating a system that can keep running when individual components fail (no single point of failure), like one computer’s motherboard frying, a power supply failing or a hard disk dying. These are reasonably rare events, but systems in data centers often run on dozens of individual computers, and things do fail; you don’t want to be down for a day waiting for a new part to be delivered.
Of course, if you don’t mind being down for a day or two when things fail, then there is no point spending the money to protect against this, which is why most businesses set RPO and RTO targets for these types of things.
Some of this comes down to procedures as well. For instance, if you have fully redundant components but then run Windows Update on them all at once, they will all reboot at once, bringing your system down. You could schedule a maintenance window for this, but generally with redundant components you update the first one, and once it’s healthy and back up, you update the secondary.
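The rolling-update procedure just described can be sketched as a small loop. This is a simplified illustration, not any vendor's tooling; the `update` and `is_healthy` callbacks stand in for whatever patching and health-check mechanism you actually use.

```python
def rolling_update(nodes, update, is_healthy):
    """Update redundant nodes one at a time, so the service as a
    whole stays up; halt if an updated node fails its health check."""
    for node in nodes:
        update(node)
        if not is_healthy(node):
            raise RuntimeError(f"{node} unhealthy after update; halting")

# Example: a primary/secondary pair, recording the update order.
updated = []
rolling_update(["primary", "secondary"],
               update=updated.append,
               is_healthy=lambda n: True)
print(updated)  # ['primary', 'secondary']
```

The key design point is the health check between nodes: a bad patch takes down one node and stops, instead of taking down the whole redundant pair.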
If you are running Sage ERP on a newer Windows Server and using SQL Server as your database then there are really good hardware/software combinations of all the standard components to give you really good solid high availability. I talked about some of these in this article.
Disaster Recovery
This usually refers to having a tested plan to spin up your IT infrastructure at an alternate site in the case of a major disaster, like an earthquake or hurricane wiping out your currently running systems.
Again, your RPO/RTO requirements will determine how much money you spend on this. For instance, do you purchase backup hardware and have it ready to go in an alternate geographic region (far enough away that the same disaster couldn’t take out both locations)?
For sure you need complete backups of everything, stored far away, that you can recover from. Then it’s a matter of acquiring the hardware and restoring all your backups. Often people store these backups in the cloud these days, because cloud storage has become quite inexpensive and most cloud storage solutions provide redundancy across multiple geographies.
The key point here is to test your procedure. If your DR plan isn’t tested then chances are it won’t work when it’s needed. Performing a DR drill is quite time consuming, but really essential if you are serious about business continuity.
One of the attractions of the cloud is having a lot of these things done for you. Sage 300 Online handles setting up all its systems HA, as well as having a tested DR plan ready to implement. Azure helps by having many data centers in different locations and then having a lot of HA and DR features built into their components (especially the PaaS ones). This then removes a lot of management and procedural headaches from running your business.
If a data center is completely wiped out, then the decision to execute the DR plan is easy. The harder case comes when the primary site has been down for a few hours, people are working hard to restore service, but it seems to be dragging on. Then you have a hard decision: kick in the DR plan, or wait to see if the people recovering the primary can succeed. These sorts of outages are often caused by electrical problems, or by problems with large SANs.
One option is to start spinning up the alternative site, restoring backups if necessary and getting ready, so when you do make the decision you can do the switch over quickly. This way you can often delay the hard decision and give the people fixing the problem a bit more time.
Having a good tested DR plan is the first step, but businesses need to realize that if a major disaster like an earthquake wipes out a lot of data centers, many companies are going to activate their DR plans at once. This scenario won’t have been tested. We could easily see a cascading outage, where the surge in usage takes down many other sites until the initial wave passes. Generally, businesses have to be prepared to receive poor service until everyone has moved over and things settle down again.
Responsible companies should have solid plans for both high availability and disaster recovery. At the same time they need to compare the cost of these against the time they can afford to be down against the probability of these scenarios happening to them. Due to the costs and complexities of handling these scenarios, many companies are moving to the cloud to offload these concerns to their cloud application provider. Of course when choosing a cloud provider make sure you check the RPO and RTO that they provide.
In a few previous blog posts I’ve been talking about attracting new employees whether through office design, advice for someone starting their career or corporate mobility. In this article I’ll be looking at some ideas on how to keep existing employees. Generally the value of a high tech company largely depends on the IP contained in the heads of the employees and growth prospects depend on their ability to execute.
High Costs of Hiring and Training New People
Hiring new employees is a slow and time consuming process, especially in today’s job market, which is very hot with all the venture capital freely flowing right now. Is this a bubble that will shortly burst? Either way, hiring is slow going right now. Then any new employee takes quite a bit of time to learn your ways of doing things and to become familiar with your existing programs and systems.
On the other hand, new employees do bring new ideas, new experiences and new perspectives that greatly help an organization. Having a stream of new employees is very beneficial, but when it becomes a torrent, things get tricky.
To retain employees, it isn’t just a matter of higher salaries (though that works well for me), but understanding people’s motivations which may not be intuitive. A good video on people’s motivations is this one. Motivations are really quite complex and much more is involved than just money. This video’s thesis is that you need to pay enough money to take money off the table as an issue, then the priorities become:
Autonomy: people want to be self-directed, they want control over what they do. This is one of the reasons that unstructured time is so successful at so many organizations.
Mastery: people want to have mastery at what they are doing. They need time to learn and practice what they are doing in order to raise their work to a higher level. Often in technical organizations, this is why frequently moving people between projects causes so much dissonance. People aren’t just cogs that do repetitive work that are all interchangeable. This is often confused with resistance to change which is something quite different.
Purpose: people want to make a contribution. They want to see their work being used by happy customers. They want to see their work making other people’s lives better. Putting out poor quality products that annoy people will make employees want to leave an organization. Having corporate policies that violate customers’ privacy, or engaging in other semi-legal or immoral corporate activities, will disengage the workforce.
If a company pays a competitive salary then these items will be very important in engaging and retaining employees. But there are still other factors.
One of my favorite ways to be retained by an employer is golden handcuffs. These are benefits like stock options or future bonuses that you have to remain an employee to collect. Often these can become quite valuable, making it a very difficult decision to leave. For instance, stock options might vest over five years and be retained for ten. If your company is growing and its stock is going up, then these can become very valuable, and walking away from them is as difficult as getting out of handcuffs. Even if your company isn’t public, having these in the hope of going public is a great retention tactic.
Technical employees like programmers value challenging work where they get to use newer technologies. This keeps people interested via continuous learning and people feel secure in their profession since they know their skills are up to date.
A lot of times technical people leave an organization because they feel their skills are getting dated and that it’s hard to learn and practice newer practices.
When performing employee surveys, the key answers to the question of why people stay are often that they like their co-workers and/or they like their boss. To some degree this comes down to having a very positive work environment: ensuring everyone treats everyone else with respect and that bad behavior towards other people isn’t tolerated.
Another key aspect when hiring is to consider how people will fit into the current teams, and often to give team members a chance to participate in the job interview process and give their input on this.
Probably the most important relationship is between an employee and their boss, which means that ensuring managers are properly trained, and that you have good managers, is extremely important.
Having good vertical communications in an organization is critical. A lot of times when people are having problems or not fitting in, they are saying so, just no one is listening. Many times people leave due to misunderstandings or frustrations that they aren’t being heard. Having good clear communications channels is crucial.
Also an organization needs to ensure that all the employees know what the corporate priorities are and also what is the reasoning behind these. People won’t be engaged if they don’t understand why a company is doing something and in fact will often act against it.
Another good practice is to have good coaching and mentoring programs within the organization. These can really help with communications and employee development.
Don’t Reward the Bad
On the other hand, you don’t want to retain people at any cost. If people aren’t performing, aren’t engaged or exhibit bad behavior, don’t reward them. Often companies give out bonuses anyway because they are worried about losing the employee, but I think in some cases it’s better for everyone if the employee finds a different opportunity. You especially don’t want to do this year after year, or people just won’t have confidence in your rewards system.
Retaining employees doesn’t have to be hard. Generally employees are motivated by things that are also good for the company like pursuing innovation, pursuing learning and staying up to date. Generally a healthy happy workforce is also a productive workforce, so many of these items are in everyone’s interest. When companies lose sight of this, they get themselves into trouble.
My wife, Cathalynn, and I were recently discussing issues with people moving to other cities to pursue their careers and the hard decisions that were involved in doing this. My nephew, Ian Smith, is just starting his career and when choosing where to work has to consider what it takes to grow in the role he eventually accepts. When I started at Computer Associates, if you wanted to move up in the organization past a certain point, then you had to move to the company headquarters in New York. Similarly, when Cathalynn was working at Motorola, the upwardly mobile had to relocate to Schaumburg, Illinois.
From Cathalynn Labonté-Smith
Recently, Vancouver hosted a Heritage Classic hockey game at BC Place as have many cities across Canada. An outdoor rink facsimile was made inside an indoor venue to recreate a 1915 game complete with original uniforms and “snow”. The plan was to retract the ceiling on the dome but a torrential downpour kept the giant umbrella deployed. Despite the nostalgia of the game the Vancouver Canucks and Ottawa Senators were playing for real—this game counted for NHL points, so the integrity of the ice had to be maintained.
We’ve all heard of the glass ceiling. Indeed, yesterday (March 8th) was International Women’s Day—a day to reflect on all aspects of women’s equality and well-being. In the corporate world, how are we doing? According to Catalyst, only 4.6% of Fortune 1000 companies have women CEOs (http://www.catalyst.org/knowledge/women-ceos-fortune-1000).
We’ve all heard of hitting the glass ceiling; however, living on the West Coast and working in the high technology sector, we have what I call an umbrella ceiling, and it applies to both genders. Umbrella in the down or sun position: you are blessed with a lifestyle that promotes health and well-being, with a year-round outdoor playground and cultural diversity. Umbrella in the up or rain position: to move on to a top job within any corporation that has a head office outside of British Columbia, you have to leave. We’ve been to many a tearful going-away party. But then if you stay, as the Smiths have, where our roots and family are, you may spend your weekends hiking, snowboarding, cycling, gardening, wine-tasting, cross-border shopping in Seattle and in many other wonderful pursuits, so that’s cool too.
Does it have to continue to be this way? With all the technology that is available (Skype and other teleconferencing software, cloud applications, mobile phones, portals, easy access to travel and other collaborative tools), why do corporations still tend to centralize top officers in one location? Or can companies truly embrace the mobile workforce, including more women at the CEO level? Are they missing out on or losing top talent to this-is-the-way-we’ve-always-done-itism?
I’m turning this over to the expert, Mr. Steve himself. Cat out.
Physical versus Virtual Offices
A lot of discussion comes down to how important is face-to-face interaction. How much can be done virtually via Skype, e-mail, telepresence, chat and other collaborative technologies?
My own experience is that there are a lot of communication problems that can easily be cleared up face-to-face. Often without direct interaction, misunderstandings multiply and don’t get resolved. Probably the worst for this is e-mail. Generally, programmers don’t like to talk on the phone and so will persist with e-mail threads that lead nowhere for far too long rather than just picking up the phone and resolving the issue.
But with video calls now so routine, can much be handled this way instead, with physical meetings kept to a minimum? Another thing that limits interaction is living in different time zones, and how much of the day overlaps. For example, I have days bookended by early morning and late evening conference calls.
Generally, office design has improved over the years to better facilitate teamwork and collaboration. If you aren’t in this environment, are you as productive as the people who are?
Tim Bray leaves Google to stay in Vancouver
A recent high profile case of this was Tim Bray who worked at Google but lives in Vancouver. He gave a quick synopsis on his blog here. Google has a reputation as a modern web cloud company, and yet here is a case where having someone physically present is the most important qualification for the job. If Google can’t solve this problem, does anyone else have a chance?
Though personally, it seems that Tim accepted the position at Google on the assumption that he would move to California, so staying in Vancouver and just pretending he would move seems a bit passive-aggressive.
Mobility of CEOs
The ultimate metric of all this is how mobile is the CEO of a company. Does the CEO have to physically be present in the corporate headquarters for a significant percentage of their time? Does the CEO have to have a residence in the same city as the corporate headquarters? Is even the idea of a physical corporate headquarters relevant anymore in today’s world?
Many top executives spend an awful lot of their time on airplanes and in hotels. To some degree, does it really matter where they live? After all, for modern global companies, getting the necessary face-to-face time with all the right people often can't be done from the corner office. Is the life of an executive similar to the life of George Clooney's character in Up in the Air?
I think if the CEO is in a fixed location then the upwardly mobile are going to be attracted to that location like moths to a flame. I think there is a strong fear in people of being out of the loop and for executives this can be quite career limiting.
I tend to think that face-to-face interaction and working together physically as a team have a lot of merit. Just breaking down the barriers to communication in this sort of tight-knit environment can still be challenging.
I find that working remotely works very well for some people. But these people have to be strongly self-motivated and have to be able to work without nearly as much direct supervision or oversight.
I’m finding that the tools for communicating remotely are getting better and better and that this does then allow more people to work remotely, but at this point anyway, we can’t go 100% down this road.
If you have any thoughts on this, leave a comment at the end of the article.
Unstructured time is becoming a common way to stimulate innovation and creativity in organizations. Basically, you give employees a number of hours each week to work on any project they like. They do need to make a proposal, and at the end give a demo of working software. The idea is to work on projects that developers feel are important and are passionate about, but that the business in general doesn't think are worthwhile, considers too risky, or has given a very low priority. Companies like Google and Intuit have been very successful at implementing this and getting quite good results.
Unstructured Time at Sage
The Sage Construction and Real Estate (CRE) development team has been using unstructured time for a while now. They have had quite a lot of participation, and it has led to products like a time and expense iPhone application. Now we are rolling out unstructured time to other Sage R&D centers, including ours here in Richmond, BC.
At this point we are starting out slowly with 4 hours of unstructured time a sprint (every two weeks). Anyone using this needs to submit a project proposal and then do a demo of working code when they judge it’s advanced enough. The proposals can be pretty much anything vaguely related to business applications.
The goal is for people to work on things they are passionate about; to get a chance to play with bleeding-edge technologies before anyone else; to develop that function, program or feature that they've always thought would be great, but that the business has always ignored. I'm really looking forward to what the team will come up with.
Crazy Projects at Google
Our unstructured time needs to be used for business applications, but I wonder what unstructured time is like at Google, where they seem to come up with things that have nothing to do with search or advertising. Is it Google's unstructured time that leads to self-driving cars, Google Glasses, military robots, human brain simulations or any of their many green projects? Hopefully these get turned into good things and aren't just Google trying to create SkyNet for real. Maybe we'll let our unstructured time go crazy as well?
I'm a big fan of Neal Stephenson, and recently read his novel Anathem. Neal's novels can be a bit off-putting since they are typically 1000 pages long, but I really enjoy them. One of the themes in Anathem is monasteries occupied by mathematicians, divided into orders by how often they report their results to the outside world. The lowest order reports every year; next is a group that reports every ten years; then a group that reports every 100 years; and finally the highest group, which only reports every 1000 years. These groups don't interact with anyone outside their order except for the week when they report and exchange information and literature with the outside world. This is in contrast to how we operate today, driven by "internet time", where we have to produce results quickly and ignore anything that can't be done quickly.
So imagine you could go away for a year to work on a project, or go away for ten years to work on something bigger. Perhaps going away for 100 or 1000 years might pose some other problems that the monks in the novel had to solve. The point is to imagine what you could accomplish if you had that long. Would you use different research approaches and methods than we typically use today? Certainly an intriguing prospect, in contrast to today, where we need to produce something every few months.
So why am I talking about Anathem and unstructured time together? Well, one problem we have is how to get started on big projects with lots of risk. Suppose you know you need to do something, but doing it is hard and time consuming? Every journey has to start with a first step, but sometimes making that first step can be quite difficult. I've had the luxury of being able to do unstructured time for a while, because I'm a software architect and not embedded in an agile sprint team. So I see technologies that we need to adopt, but they are large and won't be on Product Managers' road maps.
So I've done simple POCs in the past, like producing a mobile app using Argos. But more recently I embarked on producing a 64-Bit version of Sage 300. This worked out quite well and wasn't too hard to get going. But then I got ambitious and decided to add Unicode into the mix. This is proving more difficult, but is progressing. The difficulty with these projects is that they involve changing a large amount of the existing code base, and estimating how much work they are is very difficult. As I get a Unicode G/L going, it becomes easier to estimate, but I couldn't have taken the first step on the project without using unstructured time.
Part of the problem is that we expect our Agile teams to accurately estimate their work and then rate them on how well they do this (that they are accountable for their estimates). This has the side effect that they are then very resistant to work on things that are open ended or hard to estimate. Generally for innovation to take hold, the performance management system needs a bit of tweaking to encourage innovation and higher risk tasks, rather than only encouraging meeting commitments and making good estimates.
Now unlike Anathem, I’m not going to get 100 years to do this or even 10 years. But 1 year doesn’t seem so bad.
Now that we are adding unstructured time to our arsenal of innovation initiatives, I have high hopes that we will see all sorts of innovative new products, technologies and services emerge out of the end. Of course we are just starting this process, so it will take a little while for things to get built.
Right now we have our nephew Ian living with us as he takes a Lighthouse Labs developer boot camp program in Ruby on Rails and Web Programming. This is a very intense course, with 8 weeks of instruction followed by a guaranteed internship of at least 4 weeks with a sponsoring company. A lot of it is an immersion in the high tech culture that has developed in downtown Vancouver. This coincides with my working to expand the Sage 300 ERP development team in Richmond and our hiring efforts over the past several months. This article is based on a few observations and experiences around these two happenings.
Sage 300 ERP has been around for over thirty years now and this has caused us to have quite a few generations of programmers all working on the product. Certainly over this time the various theories of what a high tech office should look like and what a talented programmer wants in a company has changed quite dramatically. As Sage moves forwards we need to change with the times and adopt a lot of these new ways of doing things and accommodate these new preferred lifestyles.
Generally, people go through three phases in their career: starting out single, with no kids, and renting; transitioning to marriage, home ownership and eventually kids; and finally kids leaving home and considering retirement. Of course, these days there can be some major career changes along the way as industries are disrupted and people need to retrain and re-educate themselves. Every office needs a good mix to build a diverse, energetic and innovative culture, one that has experience but is still willing to take risks.
Offices or No Offices
When I started out, the ambition was to have as much privacy as possible, which usually translated to high cube walls, other barriers and the hope of one day moving into an office. At the time, Microsoft advertised that on their campus every employee got an office, so they could concentrate and think and be more effective at their work. I visited the Excel team at this time, and they had two buildings packed with lots of very small offices, which led to long, narrow, claustrophobic hallways.
A lot has changed since then. Software development has largely adopted the Scrum/Agile model, where people work together as a team and social interactions are very important. Further, as products move to the cloud, developers need to team up with DevOps and all sorts of other people who are crucial for their product's success.
Now most firms adopt a more open-office approach. There are no permanent offices; everyone works together as a team.
There is a lot of debate about which is better. People used to the greater privacy of offices and cubes are loath to lose it. People who have been using the open-office approach can't imagine moving back to cubes. Also, with more people working a percentage of their time from home, a permanent spot at the office doesn't always make sense.
Downtown versus the Suburbs
When I started with CA the office was located in town near Granville Island. This was a great location, central, many good restaurants, and easily accessible via transit. Then we moved out to Richmond to a sprawling high tech park like many of the similar companies in the 90s. These were all sprawling landscapes of three story office buildings each one with a giant parking lot surrounding it. All very similar whether in Richmond, Irvine, Santa Clara or elsewhere.
Now the trend is reversing and people are moving back downtown. Most new companies are located in or near downtown, and several large companies have set up major development centers in town recently. Meanwhile, the high tech parks in the suburbs are starting to have quite a few vacancies.
The Younger Generation
A lot of this is being driven by the twenty-something generation. What they look for in a company is quite different today than what I looked for when I started out. There are quite a few demographic changes as well as lifestyle changes that are driving this. A few key driving factors are:
- The number of young people getting drivers licenses and buying cars is shrinking. There are a lot of reasons for this. But people who can’t drive have trouble getting to the suburbs.
- People are having children later in life. Often putting it off until their late thirties or even forties.
- City cores are being re-vitalized. Even Calgary and Edmonton are trying to get urban sprawl under control.
- Real estate in the desirable high tech centers like San Francisco, Seattle or Vancouver is extremely expensive. Loft apartments downtown are often the way to go.
- Much more work is done at home and in coffee shops.
This all makes living and working downtown much more preferable. It is also leading to people requiring less space and looking for more social interactions.
Hiring that Younger Generation
To remain competitive a company like Sage needs to be able to hire younger people just finishing their education. We need the infusion of youth, energy and new ideas. If a company doesn’t get this then it will die. Right now the hiring market is very competitive. There is a lot of venture capital investment creating hot new companies, many existing companies are experiencing good growth and generally the percentage of the economy driven by high tech is growing. Another problem is that industries like construction, mining and oil are booming, often hiring people at very high wages before they even think about post-secondary education.
What we are finding is that many young people don’t have cars, live downtown and are looking to work in a cool open office concept building.
We are in the process of converting our offices to a more modern open office environment. We do allow people to work at home some days. Maybe we will even be able to move back downtown once the current lease expires? Or maybe we will need to create a satellite office downtown.
Generally, we have to become more involved with the educational institutions by hiring co-op students and other interns. We need to participate in more activities of the local developer and educational community, like the HTML500. We need to ensure that Sage is known to the students and that they consider it a good career path to embark on. Often, hiring co-op students now can lead to regular full-time employees later.
Since Sage has been around for a long time and has a large, solid customer base, we offer a stable work environment. You know you will receive your next paycheck. Many startups run out of funding or otherwise go broke. While the job market is hot, young people often don't worry about this too much, but once you have a mortgage, it can become more important.
The times are changing and not only do our developers need to keep retraining and learning how to do things differently, but so do our facilities departments, IS departments and HR departments. Change is often scary, but it is also exciting and stops life from becoming boring.
Personally, I would much rather work downtown (I already live there). I think I will be sad when I give up my office, but at the same time I don’t want to become the stereotypical old person yelling at the teenagers to get off my lawn. Overall I think I will prefer a more mobile way of working, not so tied to my particular current office.
My first computer was an Apple II Plus, which didn’t even support lower case characters. Everything was upper case. To do word processing you used special characters to change case. Now we expect our computer to not just handle upper and lower case characters, but accented characters, special symbols, all the Asian language characters, all the Arabic characters and everything else.
In the beginning there was ASCII, which allowed computers to encode the alphabet, numbers and the common typewriter characters, all 128 of them. Then we added another 128 characters for accented characters. But there were quite a few different accented characters, so we had a standard first 128 characters and then various options for the upper 128 characters. This allowed us to handle most European languages on computers. Then there was the desire to support Chinese characters, which number in the tens of thousands. So the idea came along to represent these as two bytes, or 16 bits. This worked well, but it still only supported one language at a time and often ran out of characters. Along the way there were quite a few standards and quite a bit of incompatibility when moving files containing these between computer systems. But generally the first 128 characters were the original ASCII characters, and the rest depended on the code page you chose.
To try to bring some order to this mess and make the whole problem easier, Unicode was invented. The idea here was to have one character set that contained all the characters from all the languages in the world. Sounds like a good idea, but of course computer scientists underestimated the problem. They assumed there would be at most 64K characters and that they could use 2 bytes to represent each character. Like the 640K memory barrier, this turned out to be quite a bad assumption. In fact there are now about 110,000 Unicode characters, and the number is growing.
Unicode specifies all the characters, but it allows for different encodings. These days the two most common are UTF8 and UTF16. Both of these have pros and cons. Microsoft chose UTF16 for all their systems. Since I work on Sage 300 and we are trying to solve this on Windows, that is what we will discuss in this article. Converting Sage 300 to Unicode using UTF8 would probably have been easier, since UTF8 was designed to give better compatibility with ASCII, but we live in a Windows UTF16 world where we want to interact well with SQL Server and the Windows API.
Microsoft adopted UTF16 because they felt it would be easier: basically each string becomes twice as long, since every character is represented by 2 bytes. Hence memory doubled everywhere and it was simple to convert. This was fine, except that 2 bytes doesn't hold every Unicode character anymore, so some characters actually take two 16-bit slots. But generally you can mostly predict the number of characters in a given amount of memory. It also lends itself better to using array operations, rather than having to walk through strings with next/previous operations.
Windows took the approach that to maintain compatibility they would offer two APIs, one for ANSI and one for Unicode. So any Windows API call that takes a string as a parameter will have two versions, one ending A (for ANSI) and one ending in W (for Wide). Then in Windows.h if you compile with UNICODE defined then it uses the W version, else it uses the A version. This certainly adds a lot of pollution to the Windows API. But they maintained compatibility with all pre-existing programs. This was all put in place as part of Win32 (since recompiling was necessary).
For Sage 300 we’ve resisted going all in on Unicode, because we don’t want to double the size of our API and maintain that for all time, and if we do release a Unicode version then it will break every third party add-in and customization out there. We have the additional challenge that Unicode doesn’t work very well in VB6.
But with our 64 Bit version, we are not supporting VB6 (which will never be 64 Bit), and all third parties have to make changes for 64 Bit anyway, so why not take advantage of this and introduce Unicode at the same time? This would make the move to 64 Bit more work, but hopefully it will be worth it.
Why Switch to Unicode
Converting a large C/C++ application to Unicode is a lot of work. Why go to the effort? Sage 300 has had traditional and simplified Chinese versions for a long time. What benefits does Unicode give us over the current double-byte system we support?
One is that with double-byte, only one character set can be installed on Windows at a time. This means that for our online version we need separate servers to host the Chinese version. With Unicode we can support all languages from one set of servers; we don't need separate sets of servers for each language group. This makes managing the online server farm much easier and much more uniform for upgrading and such. Besides our online offerings, we have had customers complain that when running Terminal Server they need separate ones for branch offices in different parts of the world using different languages.
Another advantage is that now we can support mixtures of scripts, so users can enter Thai in one field, Arabic in another and Chinese in another. Perhaps a bit esoteric, but it could have uses for optional fields where there are different ones for different locales.
Another problem we tend to have is with sort orders in all these different, incompatible multi-byte character systems. With Unicode this becomes much more uniform (although there are still multiple sort orders) and much easier to deal with. Right now we avoid the problem by limiting key fields to upper case alphanumeric, but perhaps down the road with Unicode we can relax this.
A big advantage is ease of setup. Getting the current multi-byte systems working requires some care in setting up the Windows server, which often challenges people and causes problems. With Unicode, things are already set up correctly, so this is much less of a problem.
Converting Sage 300
SQL Server already supports Unicode. Any UI technology newer than VB6 will also support Unicode. So that leaves our Business Logic layer, database driver and supporting DLLs. These are all written in C/C++ and so have to be converted to Unicode.
We still need to maintain our 32-Bit non-Unicode version and we don’t want two sets of source code, so we want to do this in such a way that we can compile the code either way and it will work correctly.
At the lower levels we have to use Microsoft’s tchar.h file which provides defines that will compile one way when _UNICODE is defined and another when it isn’t. This is similar to how Windows.h works for the Windows runtime, only it does it for the C runtime. For C++ you need a little extra for the string class, but we can handle that in plustype.h.
One annoying thing is that to specify a Unicode string literal in C, you write L"abc", and with the macro in tchar.h, you change it to _T("abc"). Changing all the strings in the system this way is certainly a real pain, especially since 99.99% of them will never contain a non-ASCII character because they are for debugging or logging. If Microsoft had adopted UTF8 this wouldn't have been necessary, since the ASCII characters are the same; with UTF16 this, to me, is the big downside. But then it's pretty mechanical work, and a lot of it can be automated.
At the higher levels of Sage 300, we rely more on the types defined in plustype.h and tend to use routines from a4wapi.dll rather than using the C runtime directly. This is good, since we can change these places to compile either way and hide a lot of the details from the application programmer. The other benefit is that we only need to convert the parts of the system that deal with the database and the parts that deal with string handling (like error messages).
One question that comes up is: what will be the length of fields in the database? Right now, if a field is 60 characters then it's 60 bytes. Under this method of converting the application, the field will be 60 UTF16 characters, or 120 bytes. (This holds as long as you don't use the special characters that require 4 bytes, but most characters are in the standard 64K block.)
Moving to both 64 Bits and Unicode is quite an exciting prospect. It will open up the doors to all sorts of advanced features, and really move our application ahead in a major way. It will revitalize the C/C++ code base and allow some quite powerful capabilities.
As a usual disclaimer, this article is about some research and proof of concept work we are doing and doesn’t represent a commitment as to which future version or edition this will surface in.
Last weekend I visited my parents in Victoria and my mom mentioned that she had finally used up all the computer punch cards I had left her when I graduated U-Vic. She likes them because they are more solid than paper but lighter than cardboard and are ideal for using as shopping lists and such. To have lasted this long shows how many cards I needed to do all my first and second year Computer Science courses back then at U-Vic.
This got me to thinking on how entering data into computers has changed over my career. Data entry is changing at an even faster rate these days, so I thought it might be fun to look back and to look forwards as well.
I’m not sure if this makes me appear very old, or shows how slow educational institutions adopt new technology. Not only was I the last first year computer science class to have to use punch cards, but I was also the last year when you weren’t allowed to use calculators in the Provincial exams and had to use a slide rule.
My first exposure was via an online terminal: basically, the terminal printed what you typed and sent it to the computer when you hit enter, and then would echo anything sent back. Rather primitive. It certainly was different editing files this way. Back then Basic used line numbers, and you edited lines by specifying what you wanted done to a specific line by number.
IBM Punched Cards
Then I went to the University of Victoria, which was a step backwards. Rather than a nice online terminal like the LA36, we had to enter data via punched cards and then receive the output later from a managed line printer.
You had to be careful what you typed, since each run took quite a bit of time and used up money from your account. You got good at using functions like duplicating cards up to a point, and you were always very careful not to drop them. Given the nature of the medium, it was surprisingly robust; the cards were actually pretty reliable.
Once I hit third year, we were allowed to use video terminals to do our Computer Science work. Some people were lucky enough to use very compact languages like APL to program. Others of us had to manage rather slow editors using cursor keys. Admittedly a huge improvement over the LA36 or punch cards.
For my first Co-op work term, I worked at Island Medical Labs and programmed a Radio Shack TRS-80 computer to do a number of calculations and to print a number of reports for the lab. This was my introduction to personal computing and I was so happy to have a computer all to myself, rather than the time sharing systems I was used to. It had disk drives and a daisy wheel printer. Lots of fun programming in Basic for this.
After my first co-op work term I used much of my earnings to buy a brand new Apple II+. Since I mostly took Numerical Analysis type CS courses, I was actually able to do quite a few labs off my Apple II+. I had to take a cassette tape downtown to get a print out at a computer store since I didn’t have a printer yet (or a disk drive).
A big innovation came when the Apple Lisa came out and introduced the world to the mouse and the GUI Operating System. This was a huge leap forward. I never owned a Lisa or Mac, but eventually started using Windows, a rather pale copy in those days.
Along the way there were quite a few devices that used touch as an input mechanism. But none of them were popular until the iPhone came along. Like GUIs and the mouse, Apple brought this into the mainstream. I have an iPhone 4s and really love it. Using this device is very easy and once used to it, I don’t miss the keyboard from my previous Blackberry at all.
Voice input is finally starting to work properly. Tools like Apple's Siri are actually starting to be useful. I blogged on this previously here. Certainly people are relying on this in their cars to dial the phone and to select music, or even to ask Siri trivia questions as they drive along.
Gesture is still fairly controversial. It's not clear whether Kinect helps or hinders the Xbox. People like the concept but are put off by an always-on video camera in their living room. We aren't quite at the level of Minority Report yet, but we are getting there. I'm not sure what this will do to the cube office environment once it goes mainstream.
Although not really an input device, Virtual Reality and VR Goggles are closely related. In these immersive worlds they combine voice and gesture input with providing an immersive complete visual view. The Oculus Rift was quite popular at CES this year. It will be interesting to see if these can successfully be productized and achieve a mass appeal.
I blogged previously on Google Glasses here. These are fairly controversial. Google is just in the process of releasing these into the mainstream market. It will be interesting to see if they are accepted. They are expensive, and wearers are commonly called glassholes. I’m not sure everyone else likes being filmed all the time, so it will be interesting to see how this evolves.
We are starting to see devices that can interpret and act on the electrical signals generated by the brain. Right now it takes a fair bit of concentration and training to use these, but as they get more refined, how long before we can practically control our computers by thinking? How long before we have a USB port embedded in our necks where we can read USB sticks directly?
We've come a long way from punched cards to Google Glasses. We've adopted input devices using all sorts of innovative techniques, from keyboards to mice to touch to voice to gestures, and R&D into new techniques is progressing at a breakneck pace. It will be really amazing to see what comes out over the next few years. Which experimental technologies will go mainstream, and which mainstream technologies will die out?