Stephen Smith's Blog

Musings on Machine Learning…

Archive for October 2012

Sage Vision 2012


Cambodia

The Sage Vision 2012 conference was held in Siem Reap, Cambodia this year. This conference is for Sage business partners located in Asia. Siem Reap is home to a UNESCO World Heritage site encompassing the many Hindu and Buddhist temples of the Khmer Empire from the 9th to 15th centuries, including the ancient wonder of the world, Angkor Wat.

Modern Cambodia is a terrific place to visit, with very friendly people and some really impressive things to see. It’s amazing that Cambodia has progressed as much as it has, since it was only re-formed as a kingdom in 1993 after suffering so many years of the Vietnam War, Pol Pot and various civil wars. Visiting the war museum was a real eye-opener into the suffering the people endured.

Conference

The majority of our partners in the region are from Singapore, Thailand and Malaysia, but many other countries were represented; right now we are actually seeing the most growth from Indonesia. The conference focused on the Sage 300 ERP, Sage CRM and Sage ERP X3 products, with intensive training tracks on each of these as well as general sessions on various aspects of all three.

The conference started with a keynote address introduced by Gee Sing Low, Sage’s General Manager for the region, followed by Keith Fenner on Sage X3, Lorcan Malone on Sage CRM, myself on Sage 300 ERP, and Charles Cheng showing off our revamped partner portal web site. This was followed by the trade show for ISVs and a cocktail reception by the pool.

There were then two days of sessions, and the conference wrapped up on the last night with the partner awards gala dinner.

The keynote went well; it was nice that we could weave a story where we cover the ERP market from the lower to the upper midmarket with Sage 300 and X3, and how a number of shared components tie things together, like Visual Process Flows and shared connected services, as well as a common front office solution in Sage CRM.

ISVs

About a dozen ISVs were at the conference. They had sessions on their products as well as booths in the trade show area. Some of the ISVs present included: Pacific Technology Solutions, Technisoft, Norming, AutoSimply, Peresoft, XM Development, AcDev and iCube.

Sage provides the core ERP solution and ISVs then provide specialized solutions for various vertical industries. To a large degree an ERP package is a development platform first and a business application second, so the success of an ERP package generally depends both on the strength of the core solution and the strength of the ISV community.

Community

I’ve attended many Sage 300/Accpac conferences over the years starting with the series of CA-Worlds in New Orleans. I’ve now attended many South African, Asian, North American and Australian Sage conferences. I’m always struck by the sense of community among all the Business Partners. Many were involved in the original Accpac Plus days. Many have been fierce business rivals for over twenty years, but it always seems like a family reunion at these conferences. Generally a lot of strong friendships and tight bonds have developed over possibly thirty years of working in the Sage 300/Accpac community.

Internet Connectedness

I gave a demonstration of our forthcoming Service Billing connected service running on my iPhone during the keynote presentation. Currently this is an HTML/JavaScript application written with the Argos SDK. At present it has no offline ability, so it must be demonstrated while online, communicating back to a server hosted in Microsoft’s Azure US West Coast data center. In Cambodia, I could rarely get data connectivity with the provider that Rogers partners with over there, so I had to use the hotel Wi-Fi for the demo. Generally this is a no-no, especially at technology conferences where all the attendees are connected to the Wi-Fi as well.

To me, it’s amazing that this demo actually went quite well. It really demonstrated that we do live in a very connected world. I was easily entering service tickets into my ERP while on the road ten thousand miles away in Cambodia. If things work here, I tend to think they really should work anywhere.

Summary

Sage Vision 2012 was another fun and informative conference. I have lots of feedback to bring back home. I visited many interesting sites around Siem Reap. I accumulated many air miles. I find all these conferences are always a worthwhile endeavor.


Our First Hackathon


Introduction

Hackathons are becoming a fairly common method to stimulate innovation at companies, software or otherwise. We recently held our first Hackathon here with the Sage 300 ERP development team. We are adding Hackathons to our Sage Innovation Process as an idea generator and a concept tester.

Probably the most famous recent Hackathons are those held at Facebook, which resulted in the Like button, Facebook Chat and the Timeline feature. If you search YouTube for Facebook Hackathons, you can find all sorts of videos showing these. In fact Hackathons are being conducted in industries outside of software development, in areas like government and food production.

The key goal of Hackathons is to stimulate the creative juices in the organization: to get ideas flowing, to provide a platform to quickly develop them, to show them off and then possibly productize them. In some sense you would like to have everyone creative and innovative all the time, but the pressures of day-to-day tasks usually dampen such things.

Hackathons also give programmers a chance to do projects they’ve always wanted to do and felt were important, but couldn’t convince Product Management to prioritize high enough to get done.

Logistics

We decided to have a two day Hackathon. We had a kick-off meeting just before lunch on Wednesday, the teams then had two days to hack, and we had a results presentation just after lunch on Friday. Some people formed small teams, others worked solo; basically the two days were up to them. We provided snacks and lunch on Thursday.

Facebook runs their Hackathons for 24 hours and people don’t sleep. We thought that too extreme. Although some people worked quite long hours getting their hacks to work, no one missed a night’s sleep. Our feeling is that sleep deprivation doesn’t help and is in fact quite destructive. Often a good night’s sleep is what you need to solve difficult problems.

Idea Generation

When we were initially planning the Hackathon, we were worried that people wouldn’t participate because they would have trouble coming up with ideas of what to do. So to try to alleviate this, we came up with a list of suggestions for people.

What we found instead was that this wasn’t a problem at all. We had really good participation and none of our original ideas were used. All teams either had a brainstorming session to start with, or had their own ideas that they had been thinking about and just needed an opportunity to explore.

One key is to give plenty of warning of an upcoming Hackathon, so people have time to come up with ideas and to network with their peers to form teams.

Results

The results of the Hackathon greatly exceeded our expectations. All the teams, except for one that had to deal with an emergency issue, were able to demonstrate useful and exciting results.

The projects were very diverse, including:

  • testing out a new automated test tool
  • evaluating static code analysis tools on our code
  • developing a new customer information connected service
  • creating a better tool for writing knowledge base articles
  • creating a direct-to-customer advertising feature
  • adding a key CRM integration feature
  • adding Skype and Google Maps integration
  • fixing some long-standing annoyances that never made the priority list

Below is a picture of the winning team, which hacked a number of useful social media integrations into the Sage 300 ERP product, including a rating system for things like customers (similar to Amazon ratings), integrations to Skype and Google Maps, and a number of other things.

Write Up

We insisted that each group write up their results. We wanted to document all learnings: what was tried that didn’t work out, as well as the successes. We did this via a page on our internal development Wiki. This is a very important part of Hackathons, since you want to build on everything accomplished.

Learnings

Our summary presentations ran long because everyone was so excited to show so much; next time we will limit each presentation to five minutes.

The idea of giving awards turned out to be quite controversial. We had a best project award and a better luck next time award. Everyone felt we should get rid of the better luck next time award. Some people liked the idea of having a “winner”; others felt it corrupted the hackathon process by motivating people to produce visual fluff over perhaps more technical work.

The idea of the “better luck next time” award was to celebrate failure, since we want to motivate people to take risks and not be too conservative. However, people didn’t think this worked, partly because a couple of “failures” were actually considered successes: they provided proof that a couple of popular technologies weren’t really ready for prime time.

The two day time frame seemed to work quite well for our staff. No one wanted to switch to the 24 hour no-sleep method. The consensus was that we should try to repeat the Hackathon every two to three months. It makes no sense to do a Hackathon only once; you need to keep doing them regularly to exercise your innovation muscles or they will just atrophy again.

Summary

Hackathons are a great way to stimulate innovation in an organization. Not only do you generate a lot of ideas, but you often pick off some low hanging fruit. Or you have a POC (proof of concept) to prove out an idea to productize. Hackathons have been used successfully at many companies in many industries and our own experience was very positive.

Written by smist08

October 20, 2012 at 10:44 am

The Road to DevOps Part 2


Introduction

Last week we looked at an introduction to DevOps and concentrated on the issues around frequently deploying new versions of the software. This week we are continuing to look at DevOps but concentrating on issues with maintaining and monitoring the system during normal operations. This includes ensuring the system is available, provisioning new users, removing delinquent users and generally monitoring the system and ensuring it is healthy.

SLAs

Generally everyone wants a service that is always available, always healthy and working well. But that is too vague a statement, and the reality is that things happen and need to be dealt with. This has to be acknowledged up front and strategies put in place to deal with it. First you need stronger guidelines, and this usually starts with a Service Level Agreement (SLA) that is laid out for your customers. This details the various metrics you are promising to achieve and what happens when you don’t. Generally you need a good set of performance metrics to judge your service against.

Some of the common performance metrics are:

  • Throughput: How much work the system can process in a given time.
  • Response Time: How quickly will a given issue be resolved?
  • Reliability: System availability.
  • Load Balancing: At what load elasticity kicks in.
  • Outages: When will services be unavailable, and for how long?
  • Service Slowdown: Will services be available, but with much lower throughput?
  • Durability: How likely data is to be lost.
  • Elasticity: How much a resource can grow.
  • Linearity: System performance as the load increases.
  • Agility: How quickly the system responds to load changes.
  • Automation: Percent of requests handled without human interaction.

Even if you don’t publish these metrics externally, you need to track them to know how you are doing. Generally a DevOps team takes the approach of continual improvement (like Kaizen). A good DevOps team has dashboards that track these metrics and is always looking for ways to improve them.
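
As a concrete illustration, here is a minimal sketch of how a couple of these metrics, availability and automation rate, might be computed for such a dashboard. The record format and all the numbers are hypothetical, not from any particular monitoring product.

    from datetime import timedelta

    # Hypothetical incident records for one month: (description, downtime).
    incidents = [
        ("database failover", timedelta(minutes=12)),
        ("bad deployment rolled back", timedelta(minutes=30)),
    ]

    period = timedelta(days=30)
    downtime = sum((d for _, d in incidents), timedelta())
    availability = 100.0 * (period - downtime) / period  # timedelta / timedelta -> float

    # Automation: percent of requests handled without human interaction.
    total_requests, manual_requests = 1842, 17  # made-up counts
    automation = 100.0 * (total_requests - manual_requests) / total_requests

    print(f"Availability: {availability:.3f}%")  # 99.903% for 42 minutes down
    print(f"Automation:   {automation:.1f}%")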

Monitoring

A basic rule with cloud applications is that you need to instrument and monitor everything. First, this allows you to generate your SLA dashboard and ensure you are meeting your SLA. Second, it lets you feed information back into development on what is working well and what is working badly. For instance you can track how much a given feature is being used, or perhaps how many people start using a feature but don’t complete the operation, which could highlight a usability problem that needs to be addressed.

The same goes for performance optimization. You don’t want to bother optimizing something that is infrequently used, but from good monitoring you can see, for instance, a query that is being run very frequently and not delivering good performance. Attacking this would be helpful, both for the people issuing the query and for other people slowed down while these slower queries are being processed.
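
As a hedged illustration (not our actual instrumentation), here is the kind of timing decorator a team might put on suspect operations so the dashboard can surface the frequent, slow ones; the operation name is made up:

    import time
    from collections import defaultdict

    # Call counts and cumulative duration per operation name.
    stats = defaultdict(lambda: {"calls": 0, "total_secs": 0.0})

    def instrumented(name):
        """Decorator: record how often an operation runs and how long it takes."""
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    s = stats[name]
                    s["calls"] += 1
                    s["total_secs"] += time.perf_counter() - start
            return inner
        return wrap

    @instrumented("customer_aging_query")  # hypothetical operation
    def customer_aging_query(customer_id):
        time.sleep(0.05)  # stand-in for a real database query

    for cid in range(20):
        customer_aging_query(cid)

    for name, s in stats.items():
        print(f"{name}: {s['calls']} calls, avg {s['total_secs'] / s['calls']:.3f}s")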

The key point is to address what really matters, based on hard facts gathered by good instrumentation about what is really affecting your users. Perhaps you don’t need a monitoring center like the one below, but it sure would be cool.

Provisioning

Another operation that hopefully is going on at a rapid pace is provisioning new users. And then the reverse, hopefully at a very slow pace, is removing users. In normal operations, users should be able to sign up for your service very easily, perhaps filling out a web page, and then acknowledging a confirmation e-mail. All this should be done pretty much instantly.

What should not happen, is that the user fills out a web form, which is then submitted to a queue to be processed, then in the data center, someone reads this request and performs a number of manual steps to setup the user. Then hours or days later an e-mail is sent to the customer letting them know they can use the service.

This is really a matter of a DevOps team’s focus on automation. Chances are the steps of the manual process still need to be performed, which is fine, as long as they can be scripted and run automatically, eliminating any time delay. Generally a DevOps team is always looking to find any manual process and eliminate (automate) it.
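
A minimal sketch of what automating such a runbook might look like; the helper functions and their names are hypothetical placeholders for whatever your real per-step work is:

    import secrets

    # Each step of the old manual runbook becomes a function. These bodies
    # are stubs; real ones would call your database, directory, mailer, etc.
    def create_tenant_database(company):
        print(f"created database for {company}")

    def create_admin_user(email):
        print(f"created admin user {email}")

    def send_confirmation_email(email, token):
        print(f"mailed {email} confirmation token {token}")

    def provision(company, admin_email):
        """Run the whole signup pipeline instantly: no queue, no human."""
        create_tenant_database(company)
        create_admin_user(admin_email)
        send_confirmation_email(admin_email, secrets.token_urlsafe(16))

    provision("Acme Ltd", "admin@acme.example")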

Elastic Operations

Most people don’t buy their own data centers or their own server hardware anymore. Especially when starting up, you don’t want to make a huge investment in capital equipment. Most people use an IaaS or PaaS service like Amazon or Azure. These services are “elastic” in that you can run scripts to add or remove capacity, so they stretch and shrink with demand. You pay for what you use, so for each server you have running in this environment, you are paying some fee.

Generally a DevOps team should be monitoring the system load, and when it hits a certain level, scripts are run that create a new server and add it to the system, so the load is shared by more computing resources. By the same token, when usage drops, perhaps on the weekend or late at night, you would like to drop some of these computing resources to save money. Again the DevOps team needs to develop the necessary programs and scripts to support this sort of operation in an automatic manner (you don’t want to be paying someone to juggle these system resources). Adding resources is usually easier, since they come up empty and once they are known to the load balancer they will start being used. When shutting a server down, you have to ensure no one is using it first (or have a method to move the active users to another server); often this is done by stopping new requests going to the server and then just waiting for all the users to log off or become inactive. Generally, if your application is completely stateless, then this is all much easier.
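
The scale-up and drain-down logic described above might look something like this sketch. The provider calls are hypothetical stand-ins, since the real API depends on your IaaS vendor, and the thresholds are made up:

    import random
    import time

    SCALE_UP_LOAD, SCALE_DOWN_LOAD = 0.75, 0.25  # assumed thresholds

    # Hypothetical provider calls; real ones would hit your IaaS API.
    def launch_server():            return {"id": random.randrange(1000)}
    def stop_new_requests(server):  print(f"draining {server['id']}")
    def active_sessions(server):    return 0  # stub: pretend users logged off
    def terminate(server):          print(f"terminated {server['id']}")
    def average_load(servers):      return random.random()  # stub metric

    def autoscale(servers):
        load = average_load(servers)
        if load > SCALE_UP_LOAD:
            # Adding is the easy direction: a new server comes up empty and
            # the load balancer starts sending it work once it registers.
            servers.append(launch_server())
        elif load < SCALE_DOWN_LOAD and len(servers) > 1:
            # Shrinking needs a graceful drain: stop routing new requests,
            # then wait until the remaining users log off or go inactive.
            victim = servers.pop()
            stop_new_requests(victim)
            while active_sessions(victim) > 0:
                time.sleep(30)
            terminate(victim)

    servers = [launch_server()]
    for _ in range(10):
        autoscale(servers)
    print(f"{len(servers)} server(s) running")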

Disaster Recovery

It is also the responsibility of the DevOps team to ensure there is a good disaster recovery plan in place. For instance the Azure and Amazon services have multiple datacenters. You need to control how your application is deployed to these and how backups and redundancy are managed. Generally a higher level of redundancy and a quicker switch-over cost more money, so you need to make sure your plan is sufficient, but not overdone.

Suppose you use Azure or Amazon and, even though you have a redundant deployment to multiple datacenters, the whole service goes down? Some companies actually have redundancy across IaaS providers, so if Azure goes down they can still run on Amazon. Practically speaking, unless you are very large or have a very tight SLA, this tends to be overkill. Generally you just state in your SLA that you aren’t responsible for the provider being systemically down.

Summary

The transition from Waterfall to Agile development was an interesting one with a lot of pitfalls along the way. The transition from Agile development to DevOps is a bigger step and will involve many new learning opportunities. It takes a bit of patience, but in the end it should lead to an improved Development organization and happier customers.


The Road to DevOps Part 1


Introduction

Historically Development and Operations have been two separate departments that rarely talk to each other. Development produces a product using some sort of development methodology and when they’ve finished it, they release it and hand it off to Operations. Operations then installs the product, upgrades the users from the old version and then maintains the system, fixing hardware problems, logging software defect complaints and keeping backups. This worked fine when Development produced a release every 18 months or so, but doesn’t really work in the modern Web world where software can often be released hourly.

In the new world Development and Operations need to be combined into one team and one department. The goal is to remove any organizational or bureaucratic barriers between the two. It also recognizes that the goal of the company isn’t just to produce the software, but to run it and to keep it available, up to date and healthy for all its customers. Operations has to provide feedback immediately to Development, and Development has to quickly address any issues and provide updates quickly back to Operations.

In this posting I’m covering the aspect of DevOps concerned with frequently rolling out new versions to the cloud. In a future posting I’ll cover the aspects of DevOps concerned with the normal running of a cloud based service, like provisioning new users, monitoring the system, scaling the system and such.

Agile to DevOps

We have transitioned our software development methodology from Waterfall to Agile. A key idea of Agile is that you break work into small stories that you complete in a single short sprint (in our case two weeks). A key goal is that during the sprint each story is done completely including testing, documentation and anything else required. This then leads to the product being in a releasable state at the end of each sprint. In an ideal world you could then release (or deploy) the product at the end of each sprint.

This is certainly an improvement over Waterfall, where the product is only releasable every year or 18 months or so. But with Agile this is the ideal, not quite what is achieved in practice. Generally a release consists of the outcome of a number of sprints (usually around 10), followed by a short regression cycle held concurrently with a beta test, followed by release.

So the question is: how do we remove this extra overhead and release continuously (or much more frequently)? This is where DevOps comes in. It is a methodology developed to extend Agile principles straight through to encompass the deployment and maintenance of the product. DevOps requires Development to change the way it does things just as much as it requires Operations to change. DevOps requires a much more team-based approach and requires organizational boundaries to be removed. DevOps requires a razor-sharp focus on automating every manual process in the build, deploy, monitor and feedback cycle.

Development

Most development processes have an automated build system, usually built on something like Jenkins. The idea is that when you check in source code, the build system sees the check-in, uses rules that tell it what module the code is part of, rebuilds that module, then sees what modules depend on it and rebuilds those, and so on. As part of the build process, unit tests are run and various automated tests are set off. The idea is that if anything goes wrong, it is very clear which check-in caused it and things can be quickly fixed or rolled back.
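
That “rebuild this module and everything downstream of it” rule is essentially a graph traversal. Here is a small illustrative sketch; the module names and dependency graph are made up, and a real build system would also order the rebuilds by dependency:

    from collections import deque

    # depends_on[m] = modules that m is built from (hypothetical graph).
    depends_on = {
        "kernel": [],
        "gl": ["kernel"],
        "ap": ["kernel", "gl"],
        "ar": ["kernel", "gl"],
    }

    # Invert the graph: for each module, who depends on it?
    dependents = {m: [] for m in depends_on}
    for mod, deps in depends_on.items():
        for d in deps:
            dependents[d].append(mod)

    def modules_to_rebuild(changed):
        """Return the changed module plus everything downstream of it."""
        seen, queue, out = {changed}, deque([changed]), []
        while queue:
            m = queue.popleft()
            out.append(m)
            for d in dependents[m]:
                if d not in seen:
                    seen.add(d)
                    queue.append(d)
        return out

    print(modules_to_rebuild("gl"))  # ['gl', 'ap', 'ar']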

This is a good starting point, but for effective DevOps it needs to be refined. Most modern source control systems support branching (most famously git). For DevOps it becomes even more crucial that the master branch of the product is always in a releasable state and can be deployed at any time. The way this is achieved is by developing each feature as a separate branch. Then, when a feature is completely ready for release, it can be pulled into the master branch, which means it can be deployed at any time. Below is a diagram of how this process typically works:

Automated Testing

Obviously in this environment it isn’t going to work if, for every frequent release, you need to run a complete and thorough manual test of the entire product. In an ideal world you have very complete test coverage using various levels of automated testing. I covered some aspects of this here. You still want to do some manual testing to ensure that things still feel right, but you don’t want to have to repeat a lot of functional testing.

Operations

Operations can then take any release and, in consultation with the various stakeholders, release it to production. Operations is then in charge of this part of things and ensures the new version is installed, data is converted and everything is running smoothly.

Some organizations release everything that is built, which means their web site can be updated hourly. GitHub is a good example of this. But generally for ERP or CRM type software we want to be a bit more controlled. Generally there is a schedule of releases (perhaps one release every two weeks) and then a schedule of when things need to be done to be included in that release, which then controls which branches get pulled into the master branch. This is to ensure that there aren’t any disruptions to business customers. Again you can see that this process is blending elements of QA along with Operations, which is why the DevOps team approach works so well.

A key idea that has emerged is the concept of “Infrastructure as Code”. In this world all Operations tasks are performed by code, so things become much more repeatable and much more automated. It’s really this idea, that you build your infrastructure by writing programs that then do the real work, that has largely led to the movement of Developers into Operations.
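
In practice this usually means idempotent scripts that declare a desired state and converge the machine to it, which is the style tools like Chef and Puppet encourage. Here is a toy sketch of the idea, not any real tool’s API; the file path and contents are made up:

    import os

    def ensure_file(path, contents):
        """Idempotent: running this twice is the same as running it once."""
        if os.path.exists(path):
            with open(path) as f:
                if f.read() == contents:
                    return  # already in the desired state, so do nothing
        with open(path, "w") as f:
            f.write(contents)
        print(f"converged {path}")

    # Describe the state you want, not the steps to get there.
    ensure_file("/tmp/app.conf", "port=8080\nworkers=4\n")
    ensure_file("/tmp/app.conf", "port=8080\nworkers=4\n")  # second run: no-op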

DevOps

This is where Development and Operations must be working as a team. Operations has to let Developers know what tools (scripts) they require to deploy the new version. They need automated procedures to roll out the new version, convert data and such. They have to be working together very closely to develop all these scripts in the various tools like Jenkins, PowerShell, Maven, Ant, Chef, Puppet or Nexus.

Performing all this work takes a lot of effort. It has to be people’s full time job, or it just won’t get done properly. If people aren’t fully dedicated to this, manual processes will start to creep in, and productivity and quality will suffer.

Beyond successfully deploying the software, this team has to handle things when they go wrong. They need to be able to roll back a version, reverse the database schema changes and return to a known good state.
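
One common way to make schema changes reversible is paired up/down migrations, so a rollback is just replaying the down steps in reverse order. A hedged sketch; the SQL and table names are illustrative, not from any real product:

    # Each migration knows both how to apply itself and how to undo itself.
    migrations = [
        ("add_rating_column",
         "ALTER TABLE customers ADD COLUMN rating INT",
         "ALTER TABLE customers DROP COLUMN rating"),
    ]

    def execute(sql):
        print(f"exec: {sql}")  # stand-in for a real database call

    def upgrade():
        for name, up, _down in migrations:
            execute(up)

    def rollback():
        # Undo in reverse order to return to the last known good state.
        for name, _up, down in reversed(migrations):
            execute(down)

    upgrade()
    rollback()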

Summary

DevOps is a whole new profession, combining many of the skills of Development with those of Operations. People with these skill sets will be in high demand, as this is becoming an area that is making or breaking companies on the Web and in the Cloud. No one likes to have outages; no one likes to roll out bad upgrades. In today’s fast paced world these can put huge pressures on a company. DevOps, as a profession and a set of operating procedures, is a good way to alleviate this pressure while keeping up with the fast pace.