Stephen Smith's Blog

All things Sage ERP…

Archive for the ‘Business’ Category

Sage Summit 2014


Introduction

I’m just back from Sage Summit 2014, which was held at the Mandalay Bay Resort/Hotel in Las Vegas, Nevada. There were over 5,200 attendees at the show, a new record for Sage. The Mandalay Bay is a huge complex and I racked up a record number of steps for GCC getting from one place to another. Las Vegas is easy to get to for most people since there are a lot of direct flights from around North America, and you can find really cheap hotel accommodation near the conference (like $29 at the Excalibur, which is connected to Mandalay Bay by a free tram). The only downside to having the conference in Vegas is that smoking is still allowed in many public places, which is really annoying.

The conference had a great many guest speakers, including quite a few celebrities like Magic Johnson and Jessica Alba. The trade show wasn’t just booths; there were also open speaking theatres that always had something interesting going on, as well as the Sage Innovation Lab exhibit.

There were a great many product breakout sessions as well as a large number of breakout sessions on general business and technology topics. The intent was to make Sage Summit a place to come to for a lot more than just learning new technical details about your Sage product, or promoting new add-ons for you to purchase. A lot of customers attending the show told me that it was these general sessions on accounting, marketing and technology that they found the most useful.

The show was huge and this blog post just covers a few areas that I was directly involved in or attended.

Great General Sessions

Besides the mandatory Sage keynotes, there were quite a few general sessions which were quite amazing. My favorite was Brad Smith’s interview with Biz Stone, the co-founder of Twitter and Jelly. Biz provides a lot of interesting context on Web startups, as well as a lot of interesting stories about why he left Google and chose the path he did. It was striking how many of the successful founders left very secure, lucrative careers to struggle for years to get their dreams off the ground. A common theme was the need for persistence, so you could survive long enough to eventually get that big break. Another common theme was to follow people and ideas rather than companies and money. Now I’m going to have to read Biz’s book: “Things a Little Bird Told Me: Confessions of the Creative Mind”.


Another very popular session was the panel discussion with Magic Johnson, CEO of Magic Johnson Enterprises, Jessica Alba, co-founder of the Honest Company, and J. Carey Smith, CEO of Big Ass Solutions. This discussion concentrated on their current businesses and didn’t delve into the celebrity pasts for which at least two of the panelists are rather well known. There were a lot of good business tips given, and it was interesting to see how Magic Johnson and Jessica Alba have adapted what they did before to become quite successful CEOs.


Sage’s Technology Vision

A lot of Sage’s technology and product presentations were about our mobile and cloud technology vision. The theme was to move aggressively into these areas with purposeful innovation that still protects the investment our customers have in our current technologies. At the heart of this vision is the Sage Data Cloud. This acts as a hub that mobile solutions can connect to, as well as a way to access data in our existing products, whether they run in the cloud or are installed on premise. Below is the architectural block diagram showing the main components of this.

[Diagram: Sage Data Cloud architectural block diagram]

This is perhaps a bit theoretical, but we already have products in the market that are filling in key components of this vision. Some of these are included in the next diagram.

[Diagram: current products filling in parts of the Sage Data Cloud vision]

We use the term “hybrid cloud” quite a bit; it indicates that you can have some of your data on premise and some of your data in the cloud. There are quite a few use cases where people want this. Not everyone is sold on trusting all their data to a cloud vendor for safekeeping, and in some industries and countries there are tight regulatory controls on where your data can legally be located. The Hybrid Cloud box in the diagram includes Sage 50 ERP (US and Canadian), Sage 100 ERP and Sage 300 ERP.

To effectively operate mobile and web solutions, you need to have your data available 24×7 with a very high degree of uptime and a very high degree of security. Most small or mid-sized business customers cannot afford enough IT resources to maintain this in their own data center. One solution to this problem is to synchronize a subset of your on-premise ERP/CRM data to the Sage Data Cloud and then have your mobile solutions access this. It then becomes Sage’s responsibility to maintain the uptime and 24×7 support and to apply all the necessary security procedures to keep the data safe.
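
To make this concrete, here is a minimal sketch of the “sync a changed subset” idea. Everything below is hypothetical and invented purely for illustration (the Customer type, the CloudHub protocol and the function names); it is not the actual Sage Data Cloud API.

```swift
import Foundation

// Hypothetical sketch only: the Customer type, CloudHub protocol and function
// names are invented for illustration and are not the Sage Data Cloud API.
struct Customer {
    let id: String
    let name: String
    let outstandingBalance: Double
}

protocol CloudHub {
    // Insert or update the given records in the cloud copy of the data.
    func upsert(_ customers: [Customer]) throws
}

// Push only the records changed since the last sync, then advance the watermark.
func syncChangedCustomers(since lastSync: Date,
                          changedSince: (Date) -> [Customer],
                          hub: CloudHub) throws -> Date {
    let changed = changedSince(lastSync)
    try hub.upsert(changed)
    return Date()   // everything up to "now" is now reflected in the cloud
}
```

The same kind of hub abstraction is also what makes the ISV story in the next paragraph attractive: an add-on talks to one interface and the hub worries about which ERP product sits behind it.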

Another attraction for ISVs is to integrate their product with the Sage Data Cloud and then let the Sage Data Cloud handle all the details of integrating with the many Sage ERP products. This way they only need to write one integration rather than separate integrations for Sage 50 ERP, Sage 100 ERP, Sage 300 ERP, Sage 300 CRE, etc.

We had a lot of coverage of the Sage 300 Online offering, which has been live for a while now. It was introduced at last year’s Summit and now offers Sage 300 ERP customers the choice of installing on premise or running in the Azure cloud. Running in the cloud saves you from having to back up your data, perform updates or maintain servers and operating systems. You can just run Sage 300 and let Sage handle the details. Of course you can get a copy of your data anytime you want and even move between on premise and the cloud.

The Sage Innovation Lab

On the trade show floor we had a special section for the Sage Innovation Lab. Here you could play with Google Glass, Pebble watches, 3D printers and all sorts of neat toys, and see some of the prototypes and experiments that Sage is working on with them. We don’t know if these will all be productized, but it’s cool to get a feel for how the future might look.

Summary

This really was Sage Summit re-imagined. There were a great many sessions, keynotes and demonstrations on all sorts of topics of interest to businesses. This should be taken even further at next year’s Sage Summit, which will be in New Orleans, LA on July 27-30, 2015. Does anyone else remember all those great CA-Worlds in New Orleans back in the 90s?

Written by smist08

August 2, 2014 at 8:07 pm

Apple Mobile Trends


Introduction

With Apple’s WWDC conference just wrapping up, I thought it might be a good time to meditate on a few of the current trends in the mobile world. I think the patent wars are sorting themselves out as Google and Apple settle and we are seeing a lot more competitive copying. Apple added a lot of features that competitors have had for a while as well as adding a few innovations unique to Apple.

The competitive fervor being shown in both the Google and Apple mobile camps is impressive and making it very hard for any other system to keep up.


Cloud Storage

Apple has had iCloud for a while now, but with this version we are really seeing Apple leverage it. When Google introduced the Chromebook, they used a video to show the power of keeping things on the Web, an idea that has since been copied somewhat by Microsoft. Now Apple has taken this to the next level by allowing you to continue from device to device seamlessly, so you can easily start an e-mail on your phone and then continue working on it on your MacBook. There’s no need to e-mail things to yourself; it all just works.

Apple also copied some ideas from Google Drive and Dropbox to allow copying files to non-Apple platforms like Windows, as well as sharing documents between applications. So now this is all a bit more seamless. It’s amazing how much free cloud storage you can get by having Google, Microsoft, Apple and Dropbox accounts.

Generally this is just the beginning, as companies figure out neat things they can do when your data is in the cloud. If you are worried about privacy or the NSA reading your documents, you might try a different solution, but for many things the convenience outweighs the worries. Perhaps a bigger worry than the FBI or NSA is how advertisers will be allowed to use all this data to target you. Apple has added some features to really enable mobile advertising; whether these become too intrusive and annoying remains to be seen.

Copying is the Best Compliment

Apple has also copied quite a few ideas from Google, BlackBerry and Microsoft into the new iOS. There is a lot more use of transparency (like that introduced in Windows Vista). There is now a customizable and predictive keyboard, adding ideas from BlackBerry and Microsoft. Keyboard entry has been one of Apple’s weaknesses that it is trying to address. Similarly, the iCloud drive option is rather late to the game.

Apps versus the Web

There is a continuing battle between native applications and web applications for accessing web sites. People often complain that a native mobile application only gives them a subset of what is available on the full web site, but on the other hand the consensus is that native mobile apps give a much better experience.

True web applications give a unified experience across all devices and give the same functionality and the same interaction models. This is also easier for developers since you only need to develop once.

However, Apple is having a lot of success with apps. Generally people seem to find things more easily in the Apple App Store than by browsing and bookmarking the web. Apple claims that over half of mobile Internet traffic now goes through iOS apps (though I’m not sure if this is skewed by streaming video apps like Netflix that use a disproportionate amount of bandwidth).

Yet another Programming Language

Apple also unveiled a new programming language, Swift, for mobile development. One of the problems with C, C++ and Objective-C programming is the use of pointers, which tends to make programming harder than it needs to be. Java was an alternative object-oriented extension of C with no pointers; then C# came along, copying much of Java. Most scripting languages like Basic, JavaScript, Python, etc. have never had pointers.

Rather than go down the road of Java and C#, Swift tries to incorporate the ease of use of scripting languages while still giving you full control over the iOS APIs. How this all works out remains to be seen, but it will be interesting if it makes iPhones and iPads really easy to program, similar to the early PCs back in the Basic days.
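
To give a small taste of that scripting-like feel, here is a short snippet written in current Swift syntax (the language has evolved somewhat since the 2014 announcement, so treat this as illustrative rather than exactly what shipped that summer):

```swift
// Type inference, closures and higher-order functions read much like a scripting language.
let names = ["Ada", "Grace", "Linus"]

let shouted = names.map { $0.uppercased() }
print(shouted)   // ["ADA", "GRACE", "LINUS"]

// Optionals make "no value" explicit instead of relying on raw nil pointers.
func indexOf(_ name: String, in list: [String]) -> Int? {
    for (i, candidate) in list.enumerated() where candidate == name {
        return i
    }
    return nil
}

if let i = indexOf("Grace", in: names) {
    print("Found at index \(i)")
}
```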


The Internet of Things

Apple introduced two new initiatives, HealthKit and HomeKit. HealthKit is mostly about encouraging medical sensing devices that connect to your iPhone, whereas HomeKit is about extending iOS into devices around the home and controlling them all from your iPhone.

HealthKit is designed to centralize all your health related information in one place. There is getting to be quite a catalog of sensors and apps to continuously track your location, speed, heart rate, pulse, blood pressure, etc. If you are an athlete, this is great information on your fitness level and how you are doing. Garmin really pioneered this with their GPS watches and attached heart rate monitors. I have a Garmin watch and it provides a tremendous amount of information when I run or cycle. I don’t think this is much use for the iPhone, which I always leave behind since I don’t want to risk it getting wet, but it might really take off if Apple releases a smartwatch this fall like all the rumors say.

HomeKit is a bit of a reaction to Google buying Nest, the intelligent thermostat maker. Basically you can control all your household items from your phone, so you can warm up the house as you are driving home, or turn the lights on and off remotely. We have a cottage with in-floor heating; it would be nice if we could remotely tell the house to start heating up in the winter a few hours before we arrive, since right now it’s a bit cold when we first get there and turn on the heat. However, with zoned heating we would need four thermostats, and at $250 each that is rather expensive. I think the price of these devices has to come down quite a bit to create real adoption.

There is a lot of concern about all of these devices being hacked and interfered with, but if the vendors get the security and privacy right, then these are really handy things to have.

Summary

Apple has introduced some quite intriguing new directions. Can Swift become the Basic programming language for mobile devices? Will HealthKit and HomeKit usher in a wave of wonderful new intelligent devices? Will all the new refinements in iOS really help users have an even better mobile experience? Will native apps continue to displace web sites, or will web sites re-emerge as the dominant on-line experience? Lots of questions to be answered over the next few months, but it should be fun playing with all these new toys.

Written by smist08

June 7, 2014 at 4:27 pm

Some Thoughts on Security


Introduction

With the recent Heartbleed security exploit in the OpenSSL library, a lot of attention has been focused on how vulnerable our computer systems have become to data theft. With so much data travelling over the Internet and over wireless networks, the importance of how secure these systems are has really been brought home. And with the general direction towards an Internet of Things, all our devices, whether our fridge or our car, become potentially susceptible to hackers.

I’ll talk about Heartbleed a bit later, but first a bit of history from my experiences with secure computing environments.

Physical Isolation

My last co-op work term was at DRDC Atlantic in Dartmouth, Nova Scotia. In order to maintain security they had a special mainframe for handling classified data and to perform classified processing. This computer was located inside a bank vault along with all its disk drives and tape units. It was only turned on after the door was sealed and it was completely cut off from the outside world. Technicians were responsible for monitoring the vault from the outside to ensure that there was absolutely no leakage of RF radiation when classified processing was in progress.

After graduation from university my first job was with Epic Data. One of the projects I worked on was a security system for a General Dynamics fighter aircraft design facility. The entire building was built as a giant Faraday cage. The entrances weren’t sealed, but you had to travel through a twisty corridor to enter the building, to ensure there was no straight line for radio waves to pass out. Then surrounding the building was a large protected parking lot where only authorized cars were allowed.

Generally these facilities didn’t believe you could secure connections with the outside world. If such a connection existed, no matter how good the encryption and security measures, a hacker could penetrate it. The hackers they were worried about weren’t just bored teenagers living in their parents’ basements, but well trained and financed hackers working for foreign governments, something like the Russian or Chinese version of the NSA.

Van Eck Phreaking

A lot of attention goes to securing Internet connections. But historically data has been stolen through other means. Van Eck phreaking is a technique to listen to the RF radiation from a CRT or LCD monitor and to reconstruct the image from that radiation. Using this sort of technique, a van parked on the street with sensitive antenna equipment can reconstruct what is being viewed on your monitor, even though you are using a wired connection from your computer to the monitor. In this case, how up to date your software is or how strong your cryptography is just doesn’t matter.

Everything is Wireless

It seems that every now and then politicians forget that cell phones are really just radios and that anyone with the right sort of radio receiver can listen in. This seems to lead to a scandal in BC politics every couple of years. It is really just a reminder that unless something is specifically marked as using some sort of secure connection or cryptography, it probably isn’t secure. And if it isn’t, anyone can listen in.

It might seem that most communications are secure nowadays. Even Google search has switched to always using HTTPS, a securely encrypted channel that keeps all your search terms a secret between you and Google.

But think about all the other communication channels in use. If you use a wireless mouse or a wireless keyboard, then these are really just short range radios. Is that communication encrypted and secure? Similarly, if you use a wireless monitor, it’s even easier to eavesdrop on than via Van Eck phreaking.

What about your Wi-Fi network? Is that secure? Or is all non-https traffic easy to eavesdrop on? People are getting better and better at hacking into Wi-Fi networks.

In your car, if you are using your cell phone via Bluetooth, is this another place where eavesdropping can occur?

Heartbleed

Heartbleed is an interesting bug in the OpenSSL library that’s caused a lot of concern recently. The following XKCD cartoon gives a good explanation of how a bug in validating an input parameter caused the problem of leaking a lot of data to the web.

[XKCD cartoon: how the Heartbleed bug leaks server memory]

At the first level, any program that receives input from untrusted sources (i.e. random people out on the Internet) should very carefully and thoroughly validate that input. With the heartbeat request, you tell the server what to echo back and the length of the reply. If you claim a length much longer than the payload you actually sent, the server leaks whatever random contents of memory happened to be located there.
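
The missing check is easy to sketch. The real bug lives in OpenSSL’s C code, not Swift, and the function below is a simplified, hypothetical illustration of the validation that should have happened before building a reply:

```swift
// Simplified sketch only: the claimed reply length comes from the request itself
// and must be checked against the payload that actually arrived.
func heartbeatReply(payload: [UInt8], claimedLength: Int) -> [UInt8]? {
    // The fix: never trust the claimed length. A request whose claimed length
    // exceeds the real payload is malformed and should be dropped, not answered.
    guard claimedLength >= 0, claimedLength <= payload.count else {
        return nil
    }
    return Array(payload[0..<claimedLength])
}
```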

At the second level, this is an API design flaw: there should never have been a function with parameters that could be abused in this way.

At the third level, what allows this to go bad is a performance optimization that was put into the OpenSSL library to provide faster buffer management. Before this performance enhancement, this bug would just have caused an application fault. That would have been bad, but easy to detect, and it wouldn’t have leaked any data. At worst it would perhaps have allowed some short-lived denial of service attacks.

Mostly, exploiting this security hole just returns the attacker a bunch of random garbage. The trick is to automate the attack and repeat it thousands of times until, by fluke, you find something valuable, perhaps a private digital key or a password.


Complacency

The open source community makes the claim that open source code is safer because anyone can review the source code and find bugs, and people are invited to do this to OpenSSL. I think Heartbleed shows that security researchers became complacent and weren’t examining this code closely enough.

The code that caused the bug was checked in by a trusted coder, and was code reviewed by someone knowledgeable. Mistakes happen, but for something like this, perhaps there was a bit too much trust. I think it was an honest mistake and not deliberate sabotage by hackers or the NSA. The source code change logs give a pretty good audit of what happened and why.

Should I Panic?

In spite of what some reporters are saying, this isn’t the worst security problem that has surfaced. The holy grail for hackers is to find a way to root computers (take them over with full administrator privileges). This attack just has a small chance of providing something that helps along that way and isn’t a full exploit in its own right. Bugs in Java, IE, SQL Server and Flash have all allowed hackers to take over people’s computers; some required nothing else, and some just required tricking the user into browsing a bad web site. Similarly, e-mail and flash drive viruses have caused far more havoc than this particular problem is likely to. Another ongoing security weakness is caused by government regulations restricting the strength of encryption or forcing the disclosure of keys; these measures do little to help the government, but they really make the lives of hackers easier. But I suspect the biggest source of identity theft is data recovered from stolen laptops and other devices.

Another aspect is the idea that we should be like gazelles and rely on the herd to protect us. If we are in a herd of 100 and a lion comes along to eat one of us, then there is only a 1 in 100 chance that it will be me.

This attack does highlight the importance of some good security practices, such as changing important passwords regularly (every few months) and using sufficiently complex or long passwords.

All that being said, nearly every website makes you sign in. For web sites that I don’t care about, I just use a simple password, and if someone discovers it, I don’t really care. For other sites like personal banking I take much more care, and for sites like Facebook I take medium care. Generally, don’t provide accurate personal information to sites that don’t need it: if they insist on your birthday, enter it a few days off; if they want a phone number, make one up. That way, if the site is compromised, they just get a bunch of inaccurate data about you. Most sites ask for way too many things; resist answering, or answer inaccurately. Also avoid overly nosy surveys; they may be private and anonymous, unless hacked.

The good thing about this exploit seems to be that it was discovered and fixed mostly before it could be exploited. I haven’t seen real cases of damage being done. Some sites (like the Canada Revenue Agency) are trying to blame Heartbleed for unrelated security lapses.

Generally the problems that you hear about are the ones that you don’t need to worry about so much. But again, it is good practice to use this as a reminder to change your passwords and to minimize the amount of personally identifiable data out there. After all, dealing with things like identity theft can be pretty annoying. And this also helps with the problems that black hat hackers know about and are using, but that haven’t been discovered yet.

Summary

You always need to be vigilant about security, but it doesn’t help to be overly paranoid. Follow good on-line practices and you should be fine. The diversity of computer systems out there helps: not all are affected, and those that are tend to be good about notifying the people who have been affected. Generally a little paranoia and good sense can go a long way on-line.

Written by smist08

April 26, 2014 at 6:51 pm

Disaster Recovery


Introduction

In a previous blog article I talked about business continuity and what you need to do to keep Sage 300 ERP up and running with little or no downtime. However, I mushed together two concepts, namely keeping a service highly available and having a disaster recovery plan. In this article I want to pull these two concepts apart and consider them separately.

We’ve had to give these two concepts a lot of thought when crafting our Sage 300 Online product offering, since we want to have this service available as close to 100% as possible and then if something truly catastrophic happens, back on its feet as quickly as possible.

Terminology

There is some common terminology which you always see in discussions on this topic:

RPO – Recovery Point Objective: this is the maximum tolerable period in which data might be lost due to a major incident. So for instance if you have to restore from a backup, how long ago was that backup made.

RTO – Recovery Time Objective: this is the duration of time within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences. For instance if a computer fails, how long can you wait to replace it.

HA – High Availability: usually concerns keeping a system running with little or no downtime. This doesn’t include scheduled downtime and it usually doesn’t include a major disaster like an earthquake eating a datacenter.

DR – Disaster Recovery: this is the process, policies and procedures that are related to preparing for recovery or continuation of technology infrastructure which are vital to an organization after a natural or human-induced disaster.
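
As a tiny worked example of RPO (with made-up numbers), suppose your last good backup ran at midnight and the incident strikes 12.5 hours later: the data you stand to lose is those 12.5 hours of work. The sketch below just expresses that arithmetic; the function name and figures are illustrative only.

```swift
import Foundation

// Toy illustration: the achieved RPO is simply the gap between the last good
// backup and the moment of the incident.
func achievedRPO(lastBackup: Date, incident: Date) -> TimeInterval {
    return incident.timeIntervalSince(lastBackup)   // seconds of potential data loss
}

let lastBackup = Date(timeIntervalSince1970: 0)
let incident = Date(timeIntervalSince1970: 12.5 * 3600)
print(achievedRPO(lastBackup: lastBackup, incident: incident) / 3600)   // 12.5 hours
```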

High Availability

HA means creating a system that can keep running when individual components fail (no single point of failure), like one computer’s motherboard frying, a power supply failing or a hard disk dying. These are reasonably rare events, but systems in data centers often run on dozens of individual computers, things do fail, and you don’t want to be down for a day waiting for a new part to be delivered.

Of course, if you don’t mind being down for a day or two when things fail, then there is no point spending the money to protect against this. That is why most businesses set RPO and RTO targets for these types of events.

Some of this comes down to procedures as well. For instance, if you have fully redundant components but then run Windows Update on them all at once, they will all reboot at once, bringing your system down. You could schedule a maintenance window for this, but generally if you have redundant components you can update the primary first and, once it’s back up and healthy, then update the secondary.

If you are running Sage ERP on a newer Windows Server and using SQL Server as your database, then there are really good hardware/software combinations of all the standard components that give you solid high availability. I talked about some of these in a previous article.

Disaster Recovery

This usually refers to having a tested plan to spin up your IT infrastructure at an alternate site in the case of a major disaster, like an earthquake or hurricane wiping out your currently running systems.


Again, your RPO/RTO requirements will determine how much money you spend on this. For instance, do you purchase backup hardware and have it ready to go in an alternate geographic region (far enough away that the same disaster couldn’t take out both locations)?

For sure you need complete backups of everything, stored far away, that you can recover from. Then it’s a matter of acquiring the hardware and restoring all your backups. Often people store these backups in the cloud these days, because cloud storage has become quite inexpensive and most cloud storage solutions provide redundancy across multiple geographies.

The key point here is to test your procedure. If your DR plan isn’t tested then chances are it won’t work when it’s needed. Performing a DR drill is quite time consuming, but really essential if you are serious about business continuity.

Azure

One of the attractions of the cloud is having a lot of these things done for you. Sage 300 Online handles setting up all its systems HA, as well as having a tested DR plan ready to implement. Azure helps by having many data centers in different locations and then having a lot of HA and DR features built into their components (especially the PaaS ones). This then removes a lot of management and procedural headaches from running your business.

Hard Decisions

If a data center is completely wiped out, then the decision to execute the DR plan is easy. The harder decision comes when the primary site has been down for a few hours, people are working hard to restore service, but it seems to be dragging on. Then you face a hard call: kick off the DR plan, or wait to see whether the people recovering the primary site will be successful. These sorts of outages are often caused by electrical problems or problems with large SANs.

One option is to start spinning up the alternate site, restoring backups if necessary and getting ready, so that when you do make the decision you can do the switch-over quickly. This way you can often delay the hard decision and give the people fixing the problem a bit more time.

Dangers

Having a good tested DR plan is the first step, but businesses need to realize that if a major disaster like an earthquake wipes out a lot of data centers, then many companies are going to activate their DR plans at once. That scenario won’t have been tested. We could easily see cascading outages, as the surge in usage takes down many other sites until the initial wave passes. Generally businesses have to be prepared to not receive good service until everyone is moved over and things settle down again.

Summary

Responsible companies should have solid plans for both high availability and disaster recovery. At the same time, they need to weigh the cost of these measures against the time they can afford to be down and the probability of these scenarios happening to them. Due to the costs and complexities of handling these scenarios, many companies are moving to the cloud to offload these concerns to their cloud application provider. Of course, when choosing a cloud provider, make sure you check the RPO and RTO that they provide.

Written by smist08

April 12, 2014 at 5:56 pm

On Accelerating Projects


Introduction

Usually projects start with a POC (Proof of Concept) and then move on to a real project using some subset of existing resources. Then, if the project looks exciting, senior management will want it done faster and will be willing to throw more resources at it.

In this article I’m looking at some of the problems this causes and some of the solutions to the problem.

Just Ship the POC

In the interest of Internet time, many companies are following the Lean Startup philosophy of shipping an MVP (Minimum Viable Product) early and then quickly iterating on it based on feedback from users. This has worked well for a number of companies, but it has also failed miserably. The big mistake is usually with the Viable part of MVP: the work to make the product viable is greatly underestimated.

This approach often leads to products shipped too early, which receive a black eye or lead to very major rewrites (often called refactoring) that amount to starting over from scratch. Often this just starts you on a much bigger project that you need to ramp up for. This approach also tends to do away with a project management framework, which then leads to various project dependencies blowing up into major problems.

Brooks’ Law

Brooks’ law is a principle in software development which says that “adding manpower to a late software project makes it later.” This is from Brooks’ book The Mythical Man-Month. The corollary of Brooks’ Law is that there is an incremental person who, when added to a project, makes it take more, not less, time. And generally people that have worked on large projects know who that person is.


Some of the causes of Brooks’ Law are:

  • Training Time. It takes time to bring people up to speed. Even if they are experts in the technology you are using, they still need to learn your product and understand the work that went on before they joined the team. This also diverts productive resources from the project to conduct the training, and then extra supervision and oversight are required to ensure the new people are up to speed.
  • Communications Overhead. As the number of people increases, the number of possible communication channels increases dramatically (the quick calculation below shows how fast this grows). Keeping people in sync and moving in the same direction becomes a real management challenge.
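
The growth in channels is the standard pairwise count: with n people there are n × (n − 1) / 2 possible conversations. A tiny illustrative calculation:

```swift
// With n people there are n * (n - 1) / 2 possible pairwise channels, so doubling
// the team size roughly quadruples the number of conversations to keep straight.
func channels(_ people: Int) -> Int {
    return people * (people - 1) / 2
}

print(channels(5))    // 10
print(channels(10))   // 45
print(channels(50))   // 1225
```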


Another common comparison is that the way we put resources on a project is like hiring nine women and demanding that they produce one baby in one month.


Remedies

This might make it look like completing a large project on time, or accelerating a large project, is impossible. That conclusion is usually due to a number of misinterpretations of Brooks’ Law and to a number of older bad management practices. But ramping up a large project, although not impossible, is still highly challenging.

First, Brooks’ Law applies to projects that are already late. Ramping up the project earlier certainly helps: there is less code already developed to learn, and we don’t take resources away to babysit and train people late in the project. Having realistic schedules also helps; a schedule with no basis in reality is a problem with the schedule, not with the project.

The quantity, quality and roles of the people added to the project are major factors as well. One approach is to over-hire for a project to compensate for the time taken by training and communications overhead. Adding high quality people and removing underperformers early can really help too. Adding some roles, like QA and documentation, later in the project is not so harmful.

Modern development practices and supporting technologies can really help as well. For instance continuous integration, agile development and test driven development all help greatly by keeping projects in a complete state.

Good design patterns simplify the distribution of work, since people then know where they fit into the design framework and can do their part independently. Similarly, if the project is well segregated, small groups can work on independent parts of it and greatly decrease the communications overhead.

Another aspect is that the social and political climate of a company can have a big effect. Often small companies or projects can rely on a hero to carry the project to success. This model doesn’t scale to large projects, where lots of people doing repeatable work is required. Once projects get to a certain scale, all the work needs to be decentralized and delegated. Tightly controlled organizations with a lot of control freaks are going to have problems once they scale past a certain point.

A good project management framework within a company really helps as well. Things have progressed quite a bit since Gantt charts, and if the project managers have sufficient tools for good tracking and the power to organize dependencies, this can help greatly.

Things to Watch Out For

As you scale your project here are a few warning signs.

  • Different groups doing the work differently. This indicates that the design patterns aren’t laid down properly or that people aren’t following them. This will lead to major problems quickly.
  • Training time and individual ramp-up time not being factored into the schedule. Schedules that are built around resources assumed to be fully productive are going to slip. As people are added to a project, they need training, acclimatization and ramp-up time. The schedule also has to allow for the time of the trainers, mentors and supervisors of the new people.
  • Watch out for high people turnover. If people are leaving the project or company in higher than normal numbers, it usually means the project isn’t being run properly and people are unhappy working on it. Furthermore, replacing these people is expensive in recruiting, training and individual ramp-up.
  • Miscommunications. Often if a lot of communications problems are happening, it means people are avoiding the communications overhead problem by not communicating, and this causes lots of problems on its own. Managers and team leaders need to stay on top of this and make sure the necessary communication happens. There are a lot of good new tools for this, like wikis and group virtual meetings, that can help.
  • Watch out for bottlenecks. Often when you ramp up a project, one group might have been neglected and starts to be a bottleneck. Eliminating bottlenecks has to be a key role of the Project Management Organization (PMO). Especially watch out for self-imposed bottlenecks where people are taking on too much and refusing to delegate or distribute work. These sorts of practices just don’t scale.
  • Watch out for project dependencies. The more dependencies in a project, the harder it is to manage. If the project isn’t designed to scale out independently, it isn’t going to work. Lots of dependencies between groups are a real red flag and a huge indicator that the project needs to be restructured.

Summary

Many companies want to take on ambitious large scale projects. With the current demands to do things in Internet time, there are a lot of pressures to really accelerate projects. No one wants a train wreck like in the picture below, but we do have to use past experience from these to guide current projects away from the obstacles and to keep them on track.

[Photo: a real train wreck]

Written by smist08

March 29, 2014 at 6:08 pm

On Retaining Employees


Introduction

In a few previous blog posts I’ve been talking about attracting new employees whether through office design, advice for someone starting their career or corporate mobility. In this article I’ll be looking at some ideas on how to keep existing employees. Generally the value of a high tech company largely depends on the IP contained in the heads of the employees and growth prospects depend on their ability to execute.

High Costs of Hiring and Training New People

Hiring new employees is a slow and time-consuming process, especially in today’s job market, which is very hot with all the venture capital flowing freely right now. Is this a bubble that will shortly burst? Either way, hiring is fairly slow right now. Then any new employee takes quite a bit of time to learn your ways of doing things and to become familiar with your existing programs and systems.

Conversely, new employees do bring new ideas, new experiences and new perspectives that greatly help an organization. Having a stream of new employees is very beneficial, but when it becomes a torrent then things get tricky.

Motivations

To retain employees, it isn’t just a matter of higher salaries (though that works well for me), but of understanding people’s motivations, which may not be intuitive. A good video on people’s motivations is this one. Motivations are really quite complex and much more is involved than just money. The video’s thesis is that you need to pay enough money to take money off the table as an issue; then the priorities become:

Autonomy: people want to be self-directed, they want control over what they do. This is one of the reasons that unstructured time is so successful at so many organizations.

Mastery: people want mastery of what they are doing. They need time to learn and practice what they are doing in order to raise their work to a higher level. In technical organizations, this is often why frequently moving people between projects causes so much dissonance. People aren’t just interchangeable cogs doing repetitive work. This is often confused with resistance to change, which is something quite different.

Purpose: people want to make a contribution. They want to see their work being used by happy customers. They want to see their work making other people’s lives better. Putting out poor quality products that annoy people will cause employees to want to leave an organization. Having corporate policies that violate customers’ privacy, or engaging in other semi-legal or immoral corporate activities, will disengage the workforce.

If a company pays a competitive salary then these items will be very important in engaging and retaining employees. But there are still other factors.

Golden Handcuffs

One of my favorite ways to be retained by an employer is golden handcuffs. These are benefits like stock options or future bonuses that you have to remain an employee to collect. Often these can become quite valuable, making it a very difficult decision to leave. For instance, stock options might vest over five years and you can retain them for ten. If your company is growing and its stock is going up, then these can become very valuable, and walking away from them is as difficult as getting out of handcuffs. Even if your company isn’t public, having these in the hope of going public is a great retention tactic.


Challenging Work

Technical employees like programmers value challenging work where they get to use newer technologies. This keeps people interested via continuous learning and people feel secure in their profession since they know their skills are up to date.


A lot of times technical people leave an organization because they feel their skills are getting dated and it’s hard to learn and practice newer technologies.

Co-Workers

When performing employee surveys, often the key answers to the question of why people stay are that they like their co-workers and/or they like their boss. To some degree this comes down to having a very positive work environment, ensuring everyone treats everyone else with respect and that bad behavior towards other people isn’t tolerated.

Another key aspect when hiring is to consider how people will fit into the current teams, and often to give team members a chance to participate in the job interview process and give their input on this.

Probably the most important relationship is between an employee and their boss, which means that ensuring managers are properly trained, and that you have good managers, is extremely important.

Communications

Having good vertical communication in an organization is critical. A lot of times when people are having problems or not fitting in, they are saying so; it’s just that no one is listening. Many times people leave due to misunderstandings or frustrations that they aren’t being heard. Having good, clear communication channels is crucial.

An organization also needs to ensure that all the employees know what the corporate priorities are and the reasoning behind them. People won’t be engaged if they don’t understand why a company is doing something and, in fact, will often act against it.

Another good practice is to have good coaching and mentoring programs within the organization. These can really help with communications and employee development.

Don’t Reward the Bad

Conversely, you don’t want to retain people at any cost. If people aren’t performing, aren’t engaged or exhibit bad behavior, don’t reward them. Often companies give out bonuses anyway because they are worried about losing the employee, but I think in some cases it’s better for everyone if the employee finds a different opportunity. You especially don’t want to do this year after year, or people just won’t have confidence in your rewards system.

Summary

Retaining employees doesn’t have to be hard. Generally employees are motivated by things that are also good for the company like pursuing innovation, pursuing learning and staying up to date. Generally a healthy happy workforce is also a productive workforce, so many of these items are in everyone’s interest. When companies lose sight of this, they get themselves into trouble.

 

Written by smist08

March 22, 2014 at 4:15 pm

The Umbrella Ceiling


Introduction

My wife, Cathalynn, and I were recently discussing issues with people moving to other cities to pursue their careers and the hard decisions that were involved in doing this. My nephew, Ian Smith, is just starting his career and when choosing where to work has to consider what it takes to grow in the role he eventually accepts. When I started at Computer Associates, if you wanted to move up in the organization past a certain point, then you had to move to the company headquarters in New York. Similarly, when Cathalynn was working at Motorola, the upwardly mobile had to relocate to Schaumburg, Illinois.

From Cathalynn Labonté-Smith

Recently, Vancouver hosted a Heritage Classic hockey game at BC Place as have many cities across Canada. An outdoor rink facsimile was made inside an indoor venue to recreate a 1915 game complete with original uniforms and “snow”. The plan was to retract the ceiling on the dome but a torrential downpour kept the giant umbrella deployed. Despite the nostalgia of the game the Vancouver Canucks and Ottawa Senators were playing for real—this game counted for NHL points, so the integrity of the ice had to be maintained.


We’ve all heard of the glass ceiling. Indeed, yesterday (March 8th) was International Women’s Day, a day to reflect on all aspects of women’s equality and well-being. In the corporate world, how are we doing? According to Catalyst, only 4.6% of Fortune 1000 companies have women CEOs (http://www.catalyst.org/knowledge/women-ceos-fortune-1000).

We’ve all heard of hitting the glass ceiling; however, living on the West Coast and working in the high technology sector, we have what I call an umbrella ceiling, and it applies to both genders. Umbrella in the down or sun position: you are blessed with a lifestyle that promotes health and well-being, with a year-round outdoor playground and cultural diversity. Umbrella in the up or rain position: you are blocked from moving on to a top job within any corporation that has a head office outside of British Columbia, unless you are willing to leave. We’ve been to many a tearful going-away party. But then if you stay, as the Smiths have, where our roots and family are, you may spend your weekends hiking, snowboarding, cycling, gardening, wine-tasting, cross-border shopping in Seattle and in many other wonderful pursuits, so that’s cool too.

Does it have to continue to be this way? With all the technology that is available, like Skype and other teleconferencing software, cloud applications, mobile phones, portals, easy access to travel and other collaborative tools, why do corporations still tend to centralize top officers in one location? Or can companies truly embrace the mobile workforce, including more women at the CEO level? Are they missing out on or losing top talent to this-is-the-way-we’ve-always-done-itism?

I’m turning this over to the expert, Mr. Steve himself. Cat out.


Physical versus Virtual Offices

A lot of this discussion comes down to how important face-to-face interaction is. How much can be done virtually via Skype, e-mail, telepresence, chat and other collaborative technologies?

My own experience is that there are a lot of communication problems that can easily be cleared up face-to-face. Often without direct interaction, misunderstandings multiply and don’t get resolved. Probably the worst for this is e-mail. Generally, programmers don’t like to talk on the phone and so will persist with e-mail threads that lead nowhere for far too long rather than just picking up the phone and resolving the issue.

But with video calls now so routine, can much of this be handled that way instead, with physical meetings kept to a minimum? Another thing that limits interaction is living in different time zones and how much overlapping time you have; for example, I have days bookended by early morning and late evening conference calls.

Generally, office design has improved over the years as well to better facilitate teamwork and collaboration. If you aren’t in this environment, are you as productive as the people who are?

Tim Bray leaves Google to stay in Vancouver

A recent high profile case of this was Tim Bray who worked at Google but lives in Vancouver. He gave a quick synopsis on his blog here. Google has a reputation as a modern web cloud company, and yet here is a case where having someone physically present is the most important qualification for the job. If Google can’t solve this problem, does anyone else have a chance?

Though personally, it seems that Tim accepted the position at Google with the assumption that he would move to California, so staying in Vancouver and just pretending he would move seems a bit passive aggressive.

Mobility of CEOs

The ultimate metric of all this is how mobile is the CEO of a company. Does the CEO have to physically be present in the corporate headquarters for a significant percentage of their time? Does the CEO have to have a residence in the same city as the corporate headquarters? Is even the idea of a physical corporate headquarters relevant anymore in today’s world?

Many top executives spend an awful lot of their time on airplanes and in hotels, so to some degree does it really matter where they live? After all, for modern global companies, getting the necessary face-to-face time with all the right people often can’t be done from the corner office. Is the life of an executive similar to the life of George Clooney’s character in Up in the Air?

I think if the CEO is in a fixed location then the upwardly mobile are going to be attracted to that location like moths to a flame. I think there is a strong fear in people of being out of the loop and for executives this can be quite career limiting.

Summary

I tend to think that face-to-face interaction and working together physically as a team have a lot of merit, although even in this sort of tight-knit environment, breaking down the barriers to communication can still be challenging.

I find that working remotely works very well for some people. But these people have to be strongly self-motivated and have to be able to work without nearly as much direct supervision or oversight.

I’m finding that the tools for communicating remotely are getting better and better and that this does then allow more people to work remotely, but at this point anyway, we can’t go 100% down this road.

If you have any thoughts on this, leave a comment at the end of the article.

