Stephen Smith's Blog

All things Sage ERP…


Using the Sage 300 .Net Helper APIs


Introduction

The main purpose of our .Net API is to access our Business Logic Views which I’ve blogged on in these articles:

An Introduction to the Sage 300 ERP .Net API
Starting to Program the Sage 300 ERP Views in .Net
Composing Views in the Sage 300 ERP .Net API
Using the Sage 300 ERP View Protocols with .Net
Using Browse Filters in the Sage 300 ERP .Net API
Using the Sage 300 .Net API from ASP.Net MVC
Error Reporting in Sage 300 ERP
Sage 300 ERP Metadata
Sage 300 ERP Optional Fields

However, there are a number of simple things that you need to do repeatedly, and opening the Views each time to do them would be a bit of a pain. So in our .Net API we provide a number of helper classes that give you quick, efficient access to things like company, fiscal calendar and currency information.

With Sage Summit 2014 just a few weeks away (you can still register here), I can't pre-empt any of the big announcements here in my blog (as much as I'd like to), so perhaps a bit of an easier .Net article instead. For many, these examples are fairly simple, but I'm always getting requests for source code, and I happen to have a test program that exercises these APIs that I can provide as an example. This program was written to help test and debug these APIs for our 64 Bit/Unicode version, which might explain why it tends to print rather a strange selection of fields from some of the classes.

Sample Program

The sample program for this article is a simple WinForms application that uses the Sage 300 ERP API to get various information from these helper classes and then populates a multi-line edit control with the information gathered. The code is in the dotnetsample folder (or zip) on Google Drive at this URL. The code is hard coded to access SAMLTD with ADMIN/ADMIN as the user id and password. You may need to change this in the Session.Open call to match what you have installed/configured on your local system. I've been building and running this using Visual Studio 2013 with the latest SPs and the latest .Net.

[Image: the dotnetsample program]

The Session Class

The session class is the starting point for everything else. Besides opening the session and establishing the DBLink, you can use this class to get some useful version information as well as some information about the user, like their language.
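
As a minimal sketch in C# (the Session class, Init and Open come from the ACCPAC.Advantage assembly; the Init arguments shown are illustrative placeholders, so adjust them and the user/password/company to match your installation):

    using System;
    using ACCPAC.Advantage;

    // Open a session against the SAMLTD sample data. The application ID,
    // program name and version passed to Init are placeholder values.
    Session session = new Session();
    session.Init("", "XY", "XY1000", "62A");
    session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);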

The DBLink Class

From the session you get a DBLink object that is then your connection to the database and everything in it. From this object you can open any of our Business Logic Views and do any processing that you like. Similarly you can also get quick access to currency and fiscal calendar information from here. Of course you could do much of this by opening various Common Service Views, but this would involve quite a few calls. Additionally the helper APIs provide some caching and calculation support.
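
Continuing the sketch above, getting the DBLink and opening a View through it looks roughly like this (the enum values shown are the commonly used ones):

    // Get a read-write link to the company database from the open session,
    // then open a Business Logic View by its Roto ID, for example the
    // Company Profile View (CS0001) from Common Services.
    DBLink dbLink = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);
    ACCPAC.Advantage.View csCompany = dbLink.OpenView("CS0001");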

The Company Class

Accessing the Company property of the DBLink object is your quick shortcut to the various company options stored in Common Services in the CS0001 View. This is where you get things like the home currency, the number of fiscal periods, whether the company is multi-currency, and address information. Generally you will need something from here in pretty much anything you do.
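
For instance, something like the following; note that the property names here are assumptions based on the options just described, so verify them against the API documentation or the sample program:

    // Read a few company options via the Company helper.
    // The property names below are guesses, not confirmed API names.
    Company company = dbLink.Company;
    Console.WriteLine("Home currency:  " + company.HomeCurrency);
    Console.WriteLine("Multicurrency:  " + company.Multicurrency);
    Console.WriteLine("Fiscal periods: " + company.NumberOfFiscalPeriods);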

The FiscalCalendar Class

You can get a FiscalCalendar object from the FiscalCalendar property of the DBLink. In accounting, fiscal periods are very important, since everything is eventually recorded in the General Ledger in a specific fiscal year/fiscal period. G/L mostly doesn't care about exact dates, but really cares about the fiscal year and period. For accurate accounting you always have to be very careful that you are putting things in the correct fiscal year and period. In Common Services we set up our fiscal years and fiscal periods, assigning them various starting and ending dates. Corporate fiscal years don't have to correspond to a calendar year and usually don't. For instance, the Sage fiscal year starts on October 1 and ends on September 30.

This object then gives you methods and properties to get the starting and ending dates for fiscal periods, years or quarters. Further, it helps you calculate which fiscal year/period a particular date falls in. Often all these calculations are done for you by the Views, but if you are entering things directly into G/L these can be quite useful. Some of the parameters to these methods are a bit cryptic, so perhaps the sample program will help anyone having trouble with them.
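
Conceptually the usage looks something like this; the method names and parameters below are hypothetical stand-ins (as noted, the real ones are a bit cryptic), so consult the sample program for the actual calls:

    // Hypothetical method names for illustration only.
    FiscalCalendar calendar = dbLink.FiscalCalendar;

    // Which fiscal year/period does today's date fall in?
    string fiscalYear = calendar.GetYearForDate(DateTime.Today);
    short fiscalPeriod = calendar.GetPeriodForDate(DateTime.Today);

    // Starting and ending dates for that fiscal period.
    DateTime periodStart = calendar.GetPeriodStartDate(fiscalYear, fiscalPeriod);
    DateTime periodEnd = calendar.GetPeriodEndDate(fiscalYear, fiscalPeriod);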

The Currency Classes

There are several classes for dealing with currencies: the Currency, CurrencyTable and CurrencyRate classes. You get these from the DBLink's GetCurrency, GetCurrencyTable and GetCurrencyRate methods. There is also a GetCurrencyRateTypeDescription method to get the description for a given Currency Rate Type.

The Currency object contains information for a given currency, like the description, number of decimals and decimal separator. Combined with the Currency Rate Type, there is a CurrencyTable entry for each combination of Currency Code and Currency Rate Type. Then for each of these there are multiple CurrencyRates, giving the rate for that currency on a given date.
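
A rough sketch of these calls (the method names come from this article, but the exact parameter lists are assumptions, and the rate type "SP" is just an example code):

    // Look up currency, table and rate information.
    // Parameter lists here are guesses; check the sample program.
    Currency currency = dbLink.GetCurrency("USD");
    Console.WriteLine(currency.Description);

    CurrencyTable table = dbLink.GetCurrencyTable("USD", "SP");
    CurrencyRate rate = dbLink.GetCurrencyRate("SP", "USD", DateTime.Today);
    string rateTypeDesc = dbLink.GetCurrencyRateTypeDescription("SP");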

So if you want to do some custom currency processing for some reason, then these are very useful objects to use. The sample program for this article has lots of examples of using all of these.

Remember to always test your programs against a multi-currency database. A common bug is to do all your testing against SAMINC and then have your program fail at a customer site that is running multi-currency. Similarly, it helps to test with a home currency, like the Japanese Yen, that doesn't have two decimal places.

Summary

This was just a quick article to talk about some of the useful helper functions in our Sage 300 ERP .Net API that help you access various system data quickly. You can perform any of these functions through the Business Logic Views, but since these are used so frequently, they save a lot of programming time.

Evicting Users from Sage 300 ERP


Introduction

Generally Sage 300 ERP is used in a multi-user environment where users could be distributed across a large building or located at many different sites. Further, Sage 300 ERP uses a concurrent licensing model for users, so if you have 10 Lanpaks then 10 people can log in at once; it doesn't matter which ten people they are.

Often companies save a bit of money by buying fewer Lanpaks than users of the product. Perhaps a clerk works the early shift of 7-3 and then when they go home a Financial Accountant runs some Financial Reports. But what happens if that clerk doesn’t sign off? What if they work at home and aren’t answering their phone? Now the Financial Accountant gets a message that all the Lanpaks are in use and can’t get their work done.

Evicting Users

To solve this problem, Sage 300 ERP 2014 Product Update 2 will be introducing an Evict Users feature. Previously we provided a detailed list of everyone in the system and what they are doing, which I blogged on here. Now you can also kick them out of the system to recover the Lanpak for someone else to use.

From the Current Users screen there is now a push button to “Sign Out Selected Users”. You then get a dialog with a dire warning and are requested to enter the admin password and confirm to kick out the desired user.

[Image: the evict users confirmation dialog]

Then in a minute or so, all the screens for that user will be terminated and their Lanpaks will be available for someone else to use.

Technical Details

So how is this accomplished? Basically, when you evict a user, the screen stores an encrypted file in the shared data folder. Periodically, any running Lanpak Manager will check whether there is a new file, and if there is, whether it is for users it is managing. If so, the Lanpak Manager will kill the processes of all the screens those users are running. The file is left in place for a few minutes, so this particular user won't be able to sign in again immediately.
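
This isn't Sage's actual code, but a minimal sketch of the general polling pattern just described, with a hypothetical folder path, file extension and interval:

    using System;
    using System.IO;
    using System.Threading;

    // Sketch of a manager process polling a shared folder for eviction
    // signal files; the path, extension and interval are all made up.
    class EvictionWatcher
    {
        const string SharedFolder = @"\\server\Sage300Shared\evict";

        public void Poll()
        {
            while (true)
            {
                foreach (string file in Directory.GetFiles(SharedFolder, "*.evict"))
                {
                    // Decrypt/parse the file, check whether it names a user
                    // this manager is responsible for, and if so kill that
                    // user's screen processes. The file is deliberately left
                    // in place for a few minutes so the user can't sign
                    // straight back in.
                    HandleEvictionRequest(file);
                }
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }

        void HandleEvictionRequest(string file) { /* omitted */ }
    }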

This is a fairly simple scheme that is fairly effective for recovering Lanpaks. It works both for regularly run screens from the desktop as well as web deployed screens from the Web Desktop or from Sage CRM.

Since eviction is by user, you can kill the ADMIN user, which will kick you out as well. If all your users sign in as ADMIN, then it will kick every user off the system. So beware that if multiple users share a user id, they will all be signed out, not just one workstation.

Performing Upgrades

Another use case that this method only partially addresses is kicking everyone out of the system so you can perform an upgrade (like a Product Update). Generally a DLL or EXE file in Windows is locked if a running process uses it. Hence you can't update Sage 300 if someone has a4wapi.dll in use, for instance. It could be that this method does get everyone out of the system, but there are a few cases which may not work:

  • If someone signs off the Sage 300 Desktop but leaves it running. In this case the EXE and DLLs are still in use, but since it isn’t using a Lanpak and isn’t associated with a user, it can’t be killed this way. However I tend to think this is fairly unlikely.
  • It won't be effective against things that quickly open sessions, do something and then close the session. This would include things like the Sage CRM integration, where the custom Sage CRM pages open a session, load their data and close the session. However, things like this tend to be Web Servers and can usually be stopped remotely, or at least from a central place.
  • Things that we allow to use Sage 300 without using a Lanpak. This would include things like parts of the Sage CRM integration, the Sage HRMS integration and the Sage 300 Portal.
  • For third party products, if they are full SDK, it will definitely work on them. If they aren’t full SDK it may or may not work depending on how they are built.

Keep in mind that the main purpose of this feature is to manage Lanpaks; performing upgrades is just a secondary thing, where hopefully it helps, but it may not be a complete solution.

The Scary Warning

The warning message when you run this is fairly severe. Part of this is because we are killing the processes that are connected to our API. Generally you won't corrupt the main database because everything is protected by transactioning. So if you do kill the process while they are, say, posting a batch, it just means the transaction in progress will be rolled back and they will have to do it again later. Generally the expectation is that this feature is used once people have gone home and aren't doing anything, in which case it is harmless. Further, in the user list screen you can see what people are doing, so don't kill the person running Day End.

However if someone is using the UI and they are resizing columns in a grid, you could catch things at the wrong time and corrupt a *_p.ism file. But these can be deleted and sometimes repaired with ScanIsam. If you are running a non-SDK third party product, I can’t really say what will happen if it’s killed.

Summary

The Evict Users feature was the number one feature as voted on our Ideas web site, and it is now in the product. So keep making suggestions at https://www11.v1ideas.com/Sage300ERP/Accpac and voting on suggestions already in the system that you would like to see implemented.

This feature will make it easier for companies to manage their Lanpaks and get better value from the system. Hopefully it will also make managing upgrades a bit easier as well.

Written by smist08

May 24, 2014 at 4:17 pm

Some Thoughts on Security


Introduction

With the recent Heartbleed security exploit in the OpenSSL library, a lot of attention has been focused on how vulnerable our computer systems have become to data theft. With so much data travelling the Internet as well as over wireless networks, this has brought home the importance of how secure these systems are. With the general direction towards an Internet of Things, all our devices, whether our fridge or our car, become possibly susceptible to hackers.

I’ll talk about Heartbleed a bit later, but first perhaps a bit of history with my experiences with secure computing environments.

Physical Isolation

My last co-op work term was at DRDC Atlantic in Dartmouth, Nova Scotia. In order to maintain security they had a special mainframe for handling classified data and to perform classified processing. This computer was located inside a bank vault along with all its disk drives and tape units. It was only turned on after the door was sealed and it was completely cut off from the outside world. Technicians were responsible for monitoring the vault from the outside to ensure that there was absolutely no leakage of RF radiation when classified processing was in progress.

After graduation from University, my first job was with Epic Data. One of the projects I worked on was a security system for a General Dynamics fighter aircraft design facility. This entire building was built as a giant Faraday cage. The entrances weren't sealed, but you had to travel through a twisty corridor to enter the building, to ensure there was no line of sight for radio waves to pass out. Then surrounding the building was a large protected parking lot where only authorized cars were allowed in.

Generally these facilities didn't believe you could secure connections with the outside world. If such a connection existed, no matter how good the encryption and security measures, a hacker could penetrate it. The hackers they were worried about weren't just bored teenagers living in their parents' basements, but well trained and financed hackers working for foreign governments. Something like the Russian or Chinese version of the NSA.

Van Eck Phreaking

A lot of attention goes to securing Internet connections. But historically data has been stolen through other means. Van Eck phreaking is a technique to listen to the RF radiation from a CRT or LCD monitor and reconstruct the image from that radiation. Using this sort of technique, a van parked on the street with sensitive antenna equipment can reconstruct what is being viewed on your monitor, even though you are using a wired connection from your computer to the monitor. In this case, how up to date your software is or how secure your cryptography is just doesn't matter.

Everything is Wireless

It seems that every now and then politicians forget that cell phones are really just radios and that anyone with the right sort of radio receiver can listen in. This seems to lead to a scandal in BC politics every couple of years. This is really just a reminder that unless something is specifically marked as using some sort of secure connection or cryptography, it probably doesn’t. And then if it doesn’t anyone can listen in.

It might seem that most communications are secure nowadays. Even Google search has switched to always use https, a very secure encrypted channel, to keep all your search terms a secret between yourself and Google.

But think about all the other communication channels in use. If you use a wireless mouse or a wireless keyboard, then these are really just short range radios. Is this communication encrypted and secure? Similarly, if you use a wireless monitor, then it's even easier to eavesdrop on than using Van Eck phreaking.

What about your Wi-Fi network? Is that secure? Or is all non-https traffic easy to eavesdrop on? People are getting better and better at hacking into Wi-Fi networks.

In your car, if you are using your cell phone via Bluetooth, is this another place where eavesdropping can occur?

Heartbleed

Heartbleed is an interesting bug in the OpenSSL library that’s caused a lot of concern recently. The following XKCD cartoon gives a good explanation of how a bug in validating an input parameter caused the problem of leaking a lot of data to the web.

[Image: XKCD cartoon explaining Heartbleed]

At the first level, any program that receives input from untrusted sources (i.e. random people out on the Internet) should very carefully and thoroughly validate any input. Here the request tells the server what to reply and the length of the reply. If you give a length much longer than the actual reply text, then the server leaks whatever random contents of memory were located there.
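
To make the missing check concrete, here is the gist paraphrased in C# (the real code is C inside OpenSSL, and C#'s own bounds checking would actually stop the real bug; the function and parameter names here are made up):

    using System;

    // The heartbeat request carries a payload plus a claimed payload length,
    // and the buggy code trusts the claimed length when building the echo.
    static byte[] BuildHeartbeatReply(byte[] memory, int payloadOffset,
                                      int actualLength, int claimedLength)
    {
        // Buggy: no check that claimedLength <= actualLength, so the reply
        // echoes whatever happens to sit next to the payload in memory.
        byte[] reply = new byte[claimedLength];
        Array.Copy(memory, payloadOffset, reply, 0, claimedLength);
        return reply;
    }

    static byte[] BuildHeartbeatReplyFixed(byte[] memory, int payloadOffset,
                                           int actualLength, int claimedLength)
    {
        // Fixed: silently discard requests whose claimed length is too big.
        if (claimedLength > actualLength)
            return null;
        byte[] reply = new byte[claimedLength];
        Array.Copy(memory, payloadOffset, reply, 0, claimedLength);
        return reply;
    }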

At the second level, this is an API design flaw: there should never have been such a function with parameters that could be abused this way.

At the third level, what allows this to go bad is a performance optimization that was put into the OpenSSL library to provide faster buffer management. Before this performance enhancement, this bug would just have caused an application fault. That would have been bad, but it would have been easy to detect and wouldn't have leaked any data. At worst it would perhaps have allowed some short lived denial of service attacks.

Mostly, exploiting this security hole just returns a bunch of random garbage to the attacker. The trick is to automate the attack and repeatedly try it on thousands of sites until, by fluke, you find something valuable, perhaps a private digital key or perhaps a password.

[Image: Heartbleed password cartoon]

Complacency

The open source community makes the claim that open source code is safer because anyone can review the source code and find bugs, and people are invited to do this to OpenSSL. I think Heartbleed shows that security researchers became complacent and weren't examining this code closely enough.

The code that caused the bug was checked in by a trusted coder, and was code reviewed by someone knowledgeable. Mistakes happen, but for something like this, perhaps there was a bit too much trust. I think it was an honest mistake and not deliberate sabotage by hackers or the NSA. The source code change logs give a pretty good audit of what happened and why.

Should I Panic?

In spite of what some reporters are saying, this isn't the worst security problem that has surfaced. The holy grail for hackers is to find a way to root computers (take them over with full administrator privileges). This attack just has a small chance of providing something to help along that way and isn't a full exploit in its own right. Bugs in Java, IE, SQL Server and Flash have all allowed hackers to take over people's computers; some didn't require anything else, some just required tricking the user into browsing a bad web site. Similarly, e-mail or flash drive viruses have caused far more havoc than Heartbleed is likely to. Another ongoing security weakness is caused by government regulations restricting the strength of encryption or forcing the disclosure of keys; these measures do little to help the government, but they really make the lives of hackers easier. But I suspect the biggest source of identity theft is data recovered from stolen laptops and other devices.

Another aspect is the idea that we should be like gazelles and rely on the herd to protect us. If we are in a herd of 100 and a lion comes along to eat one of us, then there is only a 1/100 chance that it will be me.

This attack does highlight the importance of some good security practices. Such as changing important passwords regularly (every few months) and using sufficiently complex or long passwords.

All that being said, nearly every website makes you sign in. For web sites that I don't care about, I just use a simple password, and if someone discovers it, I don't really care. For other sites, like personal banking, I take much more care. For sites like Facebook I take medium care. Generally, don't provide accurate personal information to sites that don't need it: if they insist on your birthday, enter it a few days off; if they want a phone number, make one up. That way, if the site is compromised, they just get a bunch of inaccurate data about you. Most sites ask way too many things; resist answering these, or answer them inaccurately. Also avoid overly nosey surveys; they may be private and anonymous, unless hacked.

The good thing about this exploit seems to be that it was discovered and fixed mostly before it could be exploited. I haven't seen real cases of damage being done. Some sites (like the Canada Revenue Agency) are trying to blame Heartbleed for unrelated security lapses.

Generally the problems that you hear about are the ones that you don't need to worry so much about. But again, it is a safe practice to use this as a reminder to change your passwords and minimize the amount of personally identifiable data out there. After all, dealing with things like identity theft can be pretty annoying. And this also helps with the problems that the black hat hackers know about and are using, but that haven't been discovered yet.

Summary

You always need to be vigilant about security. However, it doesn't help to be overly paranoid. Follow good on-line practices and you should be fine. The diversity of computer systems out there helps: not all are affected, and those that are, are good about notifying those that have been affected. Generally a little paranoia and good sense can go a long way on-line.

Written by smist08

April 26, 2014 at 6:51 pm

On Retaining Employees


Introduction

In a few previous blog posts I've been talking about attracting new employees, whether through office design, advice for someone starting their career or corporate mobility. In this article I'll be looking at some ideas on how to keep existing employees. Generally the value of a high tech company largely depends on the IP contained in the heads of its employees, and growth prospects depend on their ability to execute.

High Costs of Hiring and Training New People

Hiring new employees is quite a time consuming and slow process, especially in today's job market, which is very hot with all the venture capital that is freely flowing right now. Is this a bubble that will shortly burst? Either way, hiring is fairly slow right now. Then any new employee has to take quite a bit of time to learn your ways of doing things and to become familiar with your existing programs and systems.

On the converse, new employees do bring new ideas, new experiences and new perspectives that greatly help an organization. Having a stream of new employees is very beneficial, but when it becomes a torrent then things get tricky.

Motivations

To retain employees, it isn't just a matter of higher salaries (though that works well for me), but of understanding people's motivations, which may not be intuitive. A good video on people's motivations is this one. Motivations are really quite complex and much more is involved than just money. This video's thesis is that you need to pay enough money to take money off the table as an issue; then the priorities become:

Autonomy: people want to be self-directed, they want control over what they do. This is one of the reasons that unstructured time is so successful at so many organizations.

Mastery: people want to have mastery at what they are doing. They need time to learn and practice what they are doing in order to raise their work to a higher level. Often in technical organizations, this is why frequently moving people between projects causes so much dissonance. People aren’t just cogs that do repetitive work that are all interchangeable. This is often confused with resistance to change which is something quite different.

Purpose: People want to make a contribution. They want to see their work being used by happy customers. They want to see their work making other people's lives better. Putting out poor quality products that annoy people will cause employees to want to leave an organization. Having corporate policies that violate customers' privacy, or engaging in other semi-legal or immoral corporate activities, will disengage the workforce.

If a company pays a competitive salary then these items will be very important in engaging and retaining employees. But there are still other factors.

Golden Handcuffs

One of my favorite ways to be retained by an employer is golden handcuffs. These are benefits like stock options or future bonuses that you have to remain an employee to collect. Often these can become quite valuable, making it a very difficult decision to leave. For instance, stock options vest over five years and you can retain them for ten. If your company is growing and its stock is going up, then these can become very valuable, and walking away from them is as difficult as getting out of handcuffs. Even if your company isn't public, having these in the hope of it going public is a great retention tactic.

[Image: golden handcuffs]

Challenging Work

Technical employees like programmers value challenging work where they get to use newer technologies. This keeps people interested via continuous learning and people feel secure in their profession since they know their skills are up to date.

[Image: challenges]

A lot of times technical people leave an organization because they feel their skills are getting dated and that it's hard to learn and apply newer techniques there.

Co-Workers

When performing employee surveys, often the key answer given to the question of why people stay is that they like their co-workers and/or they like their boss. To some degree this comes down to having a very positive work environment, ensuring everyone treats everyone else with respect and that bad behavior towards other people isn't tolerated.

Another key aspect is, when hiring, to consider how people will fit in to the current teams, and often to give team members a chance to participate in the job interview process so they can give their input on this.

Probably the most important relationship is between an employee and his boss and this means that ensuring managers are properly trained and that you have good managers is extremely important.

Communications

Having good vertical communications in an organization is critical. A lot of times when people are having problems or not fitting in, they are saying so, just no one is listening. Many times people leave due to misunderstandings or frustrations that they aren’t being heard. Having good clear communications channels is crucial.

Also, an organization needs to ensure that all the employees know what the corporate priorities are and the reasoning behind them. People won't be engaged if they don't understand why a company is doing something, and in fact will often act against it.

Another good practice is to have good coaching and mentoring programs within the organization. These can really help with communications and employee development.

Don’t Reward the Bad

On the converse, you don't want to retain people at any cost. If people aren't performing, aren't engaged or exhibit bad behavior, don't reward them. Often companies give out bonuses anyway because they are worried about losing the employee. But I think in some cases it's better for everyone if the employee finds a different opportunity. You especially don't want to do this year after year, or people just won't have confidence in your rewards system.

Summary

Retaining employees doesn’t have to be hard. Generally employees are motivated by things that are also good for the company like pursuing innovation, pursuing learning and staying up to date. Generally a healthy happy workforce is also a productive workforce, so many of these items are in everyone’s interest. When companies lose sight of this, they get themselves into trouble.

 

Written by smist08

March 22, 2014 at 4:15 pm

The Umbrella Ceiling


Introduction

My wife, Cathalynn, and I were recently discussing issues with people moving to other cities to pursue their careers and the hard decisions that were involved in doing this. My nephew, Ian Smith, is just starting his career and when choosing where to work has to consider what it takes to grow in the role he eventually accepts. When I started at Computer Associates, if you wanted to move up in the organization past a certain point, then you had to move to the company headquarters in New York. Similarly, when Cathalynn was working at Motorola, the upwardly mobile had to relocate to Schaumburg, Illinois.

From Cathalynn Labonté-Smith

Recently, Vancouver hosted a Heritage Classic hockey game at BC Place as have many cities across Canada. An outdoor rink facsimile was made inside an indoor venue to recreate a 1915 game complete with original uniforms and “snow”. The plan was to retract the ceiling on the dome but a torrential downpour kept the giant umbrella deployed. Despite the nostalgia of the game the Vancouver Canucks and Ottawa Senators were playing for real—this game counted for NHL points, so the integrity of the ice had to be maintained.

[Image: the Heritage Classic at BC Place]

We've all heard of the glass ceiling. Indeed, yesterday (March 8th) was International Women's Day—a day to reflect on all aspects of women's equality and well-being. In the corporate world, how are we doing? According to Catalyst, only 4.6% of Fortune 1000 companies have women CEOs (http://www.catalyst.org/knowledge/women-ceos-fortune-1000).

We've all heard of hitting the glass ceiling; however, living on the West Coast and working in the high technology sector, we have what I call an umbrella ceiling, and it applies to both genders. Umbrella in the down or sun position: you are blessed with a lifestyle that promotes health and well-being, with a year-round outdoor playground and cultural diversity. Umbrella in the up or rain position: you are blocked from moving on to a top job within any corporation that has a head office outside of British Columbia; to move up, you have to leave. We've been to many a tearful going away party. But then if you stay, as the Smiths have, where our roots and family are, you may spend your weekends hiking, snowboarding, cycling, gardening, wine-tasting, cross-border shopping in Seattle and in many other wonderful pursuits, so that's cool too.

Does it have to continue to be this way? With all the technology that is available (Skype, other teleconferencing software, cloud applications, mobile phones, portals, access to travel and other collaborative tools), why do corporations still tend to centralize top officers in one location? Or can companies truly embrace the mobile workforce, including more females at the CEO level? Are they missing out on or losing top talent to this-is-the-way-we've-always-done-itism?

I’m turning this over to the expert, Mr. Steve himself. Cat out.

[Image: umbrella ceiling]

Physical versus Virtual Offices

A lot of the discussion comes down to how important face-to-face interaction is. How much can be done virtually via Skype, e-mail, telepresence, chat and other collaborative technologies?

My own experience is that there are a lot of communication problems that can easily be cleared up face-to-face. Often without direct interaction, misunderstandings multiply and don’t get resolved. Probably the worst for this is e-mail. Generally, programmers don’t like to talk on the phone and so will persist with e-mail threads that lead nowhere for far too long rather than just picking up the phone and resolving the issue.

But with video calls now so routine, can much be handled this way instead, with physical meetings kept to a minimum? Another thing that limits interaction is living in different time zones and how much overlapping time you have. For example, I have days bookended by early morning and late evening conference calls.

Generally, office design has improved over the years as well, to better facilitate team work and collaboration. If you aren't in this environment, are you as productive as the people who are?

Tim Bray leaves Google to stay in Vancouver

A recent high profile case of this was Tim Bray who worked at Google but lives in Vancouver. He gave a quick synopsis on his blog here. Google has a reputation as a modern web cloud company, and yet here is a case where having someone physically present is the most important qualification for the job. If Google can’t solve this problem, does anyone else have a chance?

Though personally, it seems that Tim accepted the position at Google with the assumption that he would move to California, so staying in Vancouver and just pretending he would move seems a bit passive aggressive.

Mobility of CEOs

The ultimate metric of all this is how mobile is the CEO of a company. Does the CEO have to physically be present in the corporate headquarters for a significant percentage of their time? Does the CEO have to have a residence in the same city as the corporate headquarters? Is even the idea of a physical corporate headquarters relevant anymore in today’s world?

Many top executives spend an awful lot of their time on airplanes and in hotels, so to some degree does it really matter where they live? After all, for modern global companies, getting the necessary face-to-face time with all the right people often can't be done from the corner office. Is the life of an executive similar to the life of George Clooney's character in Up in the Air?

I think if the CEO is in a fixed location then the upwardly mobile are going to be attracted to that location like moths to a flame. I think there is a strong fear in people of being out of the loop and for executives this can be quite career limiting.

Summary

I tend to think that face-to-face interaction and working together physically as a team has a lot of merit. Even in this sort of tight knit environment, breaking down the barriers to communication can still be challenging.

I find that working remotely works very well for some people. But these people have to be strongly self-motivated and have to be able to work without nearly as much direct supervision or oversight.

I’m finding that the tools for communicating remotely are getting better and better and that this does then allow more people to work remotely, but at this point anyway, we can’t go 100% down this road.

If you have any thoughts on this, leave a comment at the end of the article.

[Image: umbrella ceiling 2]

Unstructured Time at Sage


Introduction

Unstructured time is becoming a common way to stimulate innovation and creativity in organizations. Basically, you give employees a number of hours each week to work on any project they like. They do need to make a proposal, and at the end give a demo of working software. The idea is to work on projects that developers feel are important and are passionate about, but that the business in general doesn't think are worthwhile, thinks are too risky, or has given a very low priority. Companies like Google and Intuit have been very successful at implementing this and getting quite good results.

[Image: Dilbert cartoon on Google 20% time]

Unstructured Time at Sage

The Sage Construction and Real Estate (CRE) development team has been using unstructured time for a while now. They have had quite a lot of participation, and it has led to products like a time and expense iPhone application. Now we are rolling out unstructured time to other Sage R&D centers, including ours here in Richmond, BC.

At this point we are starting out slowly with 4 hours of unstructured time a sprint (every two weeks). Anyone using this needs to submit a project proposal and then do a demo of working code when they judge it’s advanced enough. The proposals can be pretty much anything vaguely related to business applications.

The goal is for people to work on things they are passionate about; to get a chance to play with new bleeding edge technologies before anyone else; to develop that function, program or feature that they've always thought would be great, but that the business has always ignored. I'm really looking forward to what the team will come up with.

[Image: New Yorker cartoon: so many toys, so little unstructured time]

We are still doing Hackathons, Ideajams and our regular innovation process. This is just another initiative to further drive innovation at Sage.

Crazy Projects at Google

Our unstructured time needs to be used for business applications, but I wonder what unstructured time is like at Google, where they seem to come up with things that have nothing to do with search or advertising. Is it Google's unstructured time that leads to self-driving cars, Google Glass, military robots, human brain simulations or any of their many green projects? Hopefully these get turned into good things and aren't just Google trying to create SkyNet for real. Maybe we'll let our unstructured time go crazy as well?

Anathem

I'm a big fan of Neal Stephenson, and recently read his novel Anathem. Neal's novels can be a bit off-putting since they are typically 1000 pages long, but I really enjoy them. One of the themes in Anathem is monasteries occupied by mathematicians, who are divided up into orders by how often they report their results to the outside world. The lowest order reports every year, next is a group that reports every ten years, then a group that reports every hundred years, and finally the highest group only reports every thousand years. These groups don't interact with anyone outside their order except for the week when they report and exchange information/literature with the outside world. This is in contrast to how we operate today, where we are driven by "internet time" and have to produce results quickly and ignore anything that can't be done quickly.

So imagine you could go away for a year to work on a project, or go away for ten years to work on something. (Going away for 100 or 1000 years might pose some other problems that the monks in the novel had to solve.) The point is to imagine what you could accomplish if you had that long. Would you use different research approaches and methods than we typically use today? Certainly an intriguing prospect, contrasted with our current need to produce something every few months.

My Project

So why am I talking about Anathem and unstructured time together? Well, one problem we have is how to get started on big projects with lots of risk. Suppose you know we need to do something, but doing it is hard and time consuming? Every journey has to start with a first step, but sometimes making that first step can be quite difficult. I've had the luxury of being able to do unstructured time for some time, because I'm a software architect and not embedded in an agile sprint team. So I see technologies that we need to adopt, but they are large and won't be on Product Managers' road maps.

So I've done simple POCs in the past, like producing a mobile app using Argos. But more recently I embarked on producing a 64-bit version of Sage 300. This worked out quite well and wasn't too hard to get going. But then I got ambitious and decided to add Unicode into the mix. This is proving more difficult, but is progressing. The difficulty with these projects is that they involve changing a large amount of the existing code base, and estimating how much work they are is very difficult. As I get a Unicode G/L going, it becomes easier to estimate, but I couldn't have taken the first step on the project without using unstructured time.

Part of the problem is that we expect our Agile teams to accurately estimate their work and then rate them on how well they do this (they are accountable for their estimates). This has the side effect that they are then very resistant to working on things that are open ended or hard to estimate. Generally, for innovation to take hold, the performance management system needs a bit of tweaking to encourage innovation and higher risk tasks, rather than only encouraging meeting commitments and making good estimates.

Now unlike Anathem, I’m not going to get 100 years to do this or even 10 years. But 1 year doesn’t seem so bad.

Summary

Now that we are adding unstructured time to our arsenal of innovation initiatives, I have high hopes that we will see all sorts of innovative new products, technologies and services emerge from it. Of course we are just starting this process, so it will take a little while for things to get built.

Multi-Threading in Sage 300


Introduction

In the early days of computing you could only run one program at a time on a PC. This meant if you wanted to run 10 programs at once you needed 10 computers. Then bit by bit, multitasking made its way from mainframes and Unix to PCs, which allowed you to run quite a few programs at a time. Doing this meant you could run all 10 programs on one computer, and this worked quite well. However, the overhead was still quite high, since each program used a lot of memory and switching between them wasn't all that fast. This led to the idea of multi-threading, where you run very light weight tasks inside a single program. These use the same memory and resources as the program they are running in, so switching between them is very quick, and the resources used in adding more threads are minimal.

Enter the Web

Think about how this affects you if you are building a web server, especially if you are running in the cloud. If you were single-process, then each web user running your app would need a separate VM to handle his requests, and he would interact with that VM. There would be a load balancer that routes each user's requests to the appropriate VM. This is quite an expensive way to run, since you typically pay quite a bit a month for each VM. You might be surprised to learn that there are quite a few web applications that run this way. The reason they do this is for greater security, since in this model each user is completely separated from the others; they really are running against separate machines.

The next level is to have the web server start a separate process to handle the requests for a given user. Basically, when a new user signs on, a new process is started and all his requests are routed to this process. This model is typically used by applications that don't want to support multi-threading or have other concerns. Again, quite a few web applications run this way, but due to the high resource overhead of each process, you can run at best a hundred or so users per server. Much better than one per VM, but still, for the number of customers companies want using their web sites, this is quite expensive.

The next level of efficiency is to have each new user that signs on just start a new thread. This has far less overhead, since each thread uses only a small amount of thread local storage and switching between running threads is very quick. Now we are getting into having thousands of active users running off each web server.

[Image: tangled threads]

This isn't the whole story. The next step is to make your application stateless. This means that rather than each user getting their own thread, we put all the threads in a common pool. Then when a request for a user comes in, we just use a free thread from the pool to process it. This way we don't keep any state on the server for each user, and we only need enough threads to handle the number of active requests at a given time. This means that while a user is thinking or reading a response, they are using no server resources. This is how you get web applications like Facebook that can handle billions of users (of course they still use tens of thousands of servers to do this).
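
A minimal sketch of this pattern using the .Net thread pool; the Request, Response, HandleRequest and Send names are hypothetical placeholders:

    using System.Threading;

    // Each incoming request is queued to the shared pool; no per-user thread
    // or per-user server state is kept between requests.
    void OnRequestReceived(Request request)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            // Any free pool thread may pick up any user's request, so the
            // handler must not rely on per-user global state.
            Response response = HandleRequest(request);
            Send(response);
        });
    }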

These techniques aren't only in the operating system software; modern hardware architectures have been optimized for them as well. Modern server CPUs have multiple cores which are very efficient at running multiple threads in parallel. To really take advantage of the power of these processors, you need to be a multi-threaded application.

Sage 300 ERP

As Sage 300 moves to the cloud, we have the same concerns. We’ve been properly multi-process since our 32-Bit version, back in the version 4 days (the 16-Bit version wasn’t really multi-process because 16-Bit Windows wasn’t properly multi-process).

We laid the foundations for multi-threaded operation in version 5.6A and then fully used it starting with version 6.0A for the Portal and Quote to Orders. Since then we’ve been improving our multi-threading as it is a very foundational component to being able to utilize our Business Logic Views from Web Applications.

If you look at a general text book on multi-threading, it looks quite difficult, since you have to be very careful to protect the right memory at the right time. However, a lot of the time these books are looking at highly efficient parallel algorithms, whereas we want a thread to handle a specific request for a specific user to completion. We never use multiple threads to handle a single request.

From an API point of view, this means each thread has its own .Net session object and its own set of open Sage 300 Business Logic Views. We keep these cached in a pool to be checked out, but we never have more than one thread operating on one of them at a time. This greatly simplifies how our multi-threading support needs to work.
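
This isn't our actual code, but a generic sketch of the check-out/check-in pooling idea, using hypothetical names:

    using System.Collections.Concurrent;

    // A pool of session objects (each with its own set of open Views); at
    // most one thread uses a given session at a time.
    class SessionPool
    {
        private readonly ConcurrentBag<PooledSession> pool =
            new ConcurrentBag<PooledSession>();

        public PooledSession CheckOut()
        {
            PooledSession s;
            // Reuse a cached session if one is free, otherwise create one.
            return pool.TryTake(out s) ? s : CreateNewSession();
        }

        public void CheckIn(PooledSession s)
        {
            pool.Add(s);
        }

        private PooledSession CreateNewSession()
        {
            // Open the Sage 300 session and frequently used Views here.
            return new PooledSession();
        }
    }

    class PooledSession { /* wraps the session and its open Views */ }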

If you've ever programmed our Business Logic Views, they have had the idea of being multi-threaded built into them from day 1. For instance, all variables that need to be kept from call to call are stored associated with the view handle; there are no global variables in these Views. Further, since even single threaded programs open multiple copies of the Views and use them recursively, a lot of this support has been fully tested, since it's required for those cases as well.

For version 5.6A we had to ensure that our API had a thread safe alternative for every function, and that any API that wasn't thread safe was deprecated. The sort of thing that causes threading problems is an API function that, say, just returns TRUE or FALSE on whether it succeeds, and then, if you want to know the real reason, you need to check a global variable for the last error return code. The regular C runtime has a number of functions of this nature, and we used to do this for our BCD processing. Alternatives to these functions were added that just return the error code. The reason the global variable is bad is that another thread could call one of these functions and reset the variable in between you getting the failed response and checking the variable.
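
A sketch of the difference, with made-up function names:

    // Shared between threads - the root of the problem.
    static int lastError;

    // Legacy style: on failure the caller must read lastError, but another
    // thread may overwrite it before the caller gets to look.
    static bool DoSomethingLegacy()
    {
        lastError = 42;
        return false;
    }

    // Thread safe style: the error code travels back on the caller's own
    // stack, so no shared state is involved. Returns 0 on success.
    static int DoSomethingThreadSafe()
    {
        return 42;
    }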

State

If you've worked with our Views, you will know that they are quite stateful. We can operate statelessly for simple operations, like basic CRUD operations on simple objects. However, for complicated data entry (like Order Entry or Invoice Entry) we do need to keep state while the user interacts with the document. This means we aren't 100% stateless at this point, but our hope is that as we move forward we can reduce the amount of state we keep, or reduce the number of interactions that require keeping state.

Testing Challenges

Fortunately, testing tools are getting better and better. We can test using the Visual Studio Load Tester as well as JMeter. Using these tools we can uncover the various resource leaks, memory problems and deadlocks which occur when multiple threads go wrong. Static code analysis tools and good old fashioned code reviews are very useful in this regard as well.

Summary

As we progress the technology behind Sage 300, we need to make sure it has the foundations to run as a modern web application and our multi-threading support is key to this endeavor.

 

Written by smist08

February 22, 2014 at 5:37 pm

10 Questions for Sage Uncle Steve


This is a guest blog posting by my wife, Cathalynn Labonté-Smith, though I’m the one answering the questions.

***

It may seem odd to readers to interview the man I’ve looked across the dinner table at for 29 plus years in his own blog, but we’ve had a recent addition to our household, Ian. Steve’s nephew is an enthusiastic young man who is in a programmer’s boot camp (see Steve’s Blog entry The Times They Are a Changin) and as an educator this has brought to my mind new questions for my darling husband beyond, “How was your day?” and “Will you be able to fit in a vacation around your business travel this year?” Also, he didn’t like my alternate idea of a Valentine to Computing.

We got out of the habit of talking about the details of Steve’s work since the time I worked as a technical writer in the field of wireless technology nearly a decade ago. For couples out there who both work in the same or related fields, you will know what I mean when I say it’s just best to unwind and avoid topics to do with work in the off hours.

When I left tech writing and became a teacher, occasionally I’d walk into a business class that was learning Accpac for Windows or Simply Accounting. Trained as an English teacher I’d do what all on-call teachers do when outside their subject area: stick to the lesson plan, get help from the brightest students in the class and muddle through as best I could. So it was fun to share those experiences with Steve and I actually learned a bit about the Sage products.

It’s been many years since I’ve been in the classroom, but having taught career preparation I want to know the following from Steve for programmers coming on stream. I know that Steve’s blog audience is unlikely to be junior programmers but I thought this might get his more senior executive readers thinking about what legacy they can pass along to new programmers.

Whoa, I can hear you say: what makes you think they can hear us with their ears jammed with ear buds, and if they could, we don't speak their lingo? I'm not saying they're going to sit through a PowerPoint of your ruminations; really, the best example is modelling. After all, as a teacher I found that it was an equal exchange: you can learn as much from your novice employees as they can learn from you, just about different things.

When I met Steve he was a Teacher’s Assistant in the Math Department at the University of British Columbia working on his Master’s Degree. His Math 100 class was just him, the blackboard, a huge lecture hall packed full of nervous first-years and a piece of chalk. I was never his student, no; I was on the other side of campus in Creative Writing workshops in poetry, fiction and children’s writing.

After his degree, he worked at various software companies in many different fields as a contractor, consultant or employee before finding his long-time home at Sage, where he now has over twenty years and is currently Chief Architect. I'm curious as to what Uncle Steve would say to Ian if he were around longer than it takes to gulp down his dinner and head upstairs for more studying.

1. Steve, what kind of guidance can you offer for formal programs a would-be programmer should choose for the best future employment and advancement? Can you compare it to your formal programming education?

A. I learned to program originally in Grade 11. Nowadays people have lots of opportunities to learn how to program at a young age. There are quite a few exceptional online programs where you can learn to program, for example Khan Academy, which teaches you to program in JavaScript while creating fun drawings and animations. Programming, like most skills, requires practice to master. In the book Outliers, Malcolm Gladwell maintains that it takes 10,000 hours of practice to really master something, so starting early really helps.

My undergraduate and master's degrees are in Mathematics and not Computer Science. However, I took a few CS courses along the way (in things like Numerical Analysis and Operations Research), so strictly speaking I don't have a formal CS background.

I was in the Co-op program at the University of Victoria so when I did graduate I had four work terms of job experience. Plus, I was always working on some sort of programming project on my trusty Apple II Plus computer (usually involving Fractals).

It doesn’t really matter so much which programming languages you learn, just learn a variety. After all, things are changing so fast these days that you need to expect to keep learning these as you progress through your career.

To summarize, you need something that will give you lots of practice programming, a few formal courses to give you credibility and you need to be a voracious reader.

2. In your undergraduate degree, you went through a co-op program. Is this something that you recommend and why? For example, does it make a programmer more desirable as a future employee?

A. Yes, absolutely. I think intern type programs are terrific ways to get job experience and references ready for that first real job. I did four co-op work terms and learned an awful lot about how various companies operate and what is involved. It is a great chance to get some experience with a variety of companies, perhaps a large one, a small one and a government one. I certainly give credit for co-op work terms when I’m hiring.

3. What kind of summer, part-time or volunteer work might add to and develop their skills?

A. I would look for something where you are giving back to the community, such as donating your time to a charity and if you have the chance to travel when you do this then even better. Again do something that interests you and you are passionate about.

4. What kind of advice can you give new programmers about how to pick their first employer?

A. Chances are you are going to have several jobs throughout your career. More than likely the pay will be similar, so go for something interesting. Do some research on the companies you are applying to and look beyond the initial job you will have there. Also, consider travelling to a new location for your first job to get a bit more experience of the world as well.

5. Just like some doctors are better at staying current on the latest treatments and research, how do programmers stay current when there seem to be so many new technologies and programming languages to learn? How do you manage to filter through all of it to get what will last and have future value? Or is it even critical that programmers stay current, or is there enough maintenance work to go around forever?

A.  I think the number one rule is to not rely on your employer for this. This is really your own professional responsibility. Employers will train you for what you need immediately but usually not for much else and not for things that they aren’t interested in.

One of the great things about the profession today is that most of the programming tools that are important are either open source or have free versions available (like Visual Studio Express). So you can dabble with all sorts of things in your spare time. All you really need is a computer and an Internet connection. I really believe in learning by doing. So pick something new and interesting and do a small project in it to see if you want to go deeper.

6. What are some common pitfalls new programmers could avoid in their early careers?

A. I think the most common pitfalls are either being too loyal to a company or giving up on a company too easily.

Often people early in their career have very high and probably unrealistic expectations of how well a company is run. This often gives rise to changing jobs after quick stints, which can be a mistake if you don't get ahead and instead develop a resume with lots of short stays.

The reverse is the other common mistake—being in a job that doesn’t work, but trying to stick it out too long rather than cutting the cord. Leaving is often a hard decision to make, but is often easier earlier in your career. Finding the right compromise between these two extremes can be very difficult.

7. What is the most valuable lesson or lessons that you’ve learned throughout your career that you could share with a new programmer?

A. That things are often darkest before the dawn. On any project at some point things are going to look bad, problems look unsolvable, bugs are piling up and deadlines are being missed. The lesson here is not to take the whole world’s problems on your shoulders, but to just work through the problems one by one. Often these are difficult problems that take much more time than you would have thought, but sticking to this eventually yields the light at the end of the tunnel.

Another take on this is to remain optimistic in the face of adversity. Or follow the Hitchhiker's Guide to the Galaxy's main advice: Don't Panic! (Their other advice, of always carrying a towel, I'm not so sure about.)

[Image: Don't Panic and carry a towel]

8. Who were your early role models?

A. Bill Gates and Steve Wozniak for what they did to start their companies. Steve Jobs for what he did when he returned to Apple.

9. Is there anything you would have done differently in your early career knowing what you know now?

A. There are always so many shoulda coulda wouldas. Now I know which companies back then paid the big bucks in stock options, but it’s hard to predict when looking forwards. I sometimes wonder if I should have moved from Vancouver, but then you get a beautiful day like today and just say “Nah”.

10. Is there a question that I didn’t ask that you wished I did?

A. No, this blog is already getting quite long. :-)

Point taken, Steve, this is a good place to wrap it up. Oh, and Happy Valentine's Day to you and to all your readers.

Written by smist08

February 15, 2014 at 4:34 pm

The Sage 300 System Manager Core DLLs

with 9 comments

Introduction

We hold a developer’s exchange (DevEx) every couple of weeks where one of our developers volunteers to present to all the other developers in our office. This past week I presented at the DevEx on what all the core DLLs in our Sage 300 runtime folder do. I thought this might be of interest to a wider audience, so here are the gory details.

Architecture

Our marketing-supplied architecture diagram is the following, which highlights our three tiers and hides a lot of the details of how the object repository, APIs and supporting services are implemented. I’ve blogged previously on our Business Logic Views. In this article I’m going to go into more detail on all the DLLs that provide the framework to support all of this.

[Image: Sage 300 three tier architecture diagram]

Lower Level DLLs

If you are an ISV developing Sage 300 SDK applications, or have worked for Sage on the 300 product, then you will have encountered a number of these DLLs. I’m only looking at a subset of the current DLLs, and I’m not looking at all the DLLs that support older technologies which are still present to maintain compatibility with add-ons.

[Image: lower level DLLs]

I didn’t add arrows to this diagram since everything pretty well calls everything else below it, but I did segregate the DLLs a bit by how low or high level they are. So here is a quick synopsis of each one:

A4wcompat.dll: We created this DLL back when we did a native port of the Sage 300 Views for Linux. This DLL isolates operating system differences that need more than some clever #defines. A big part of this is the thread and process synchronization and locking support. Even though we never released the native Linux version, this isolation of the operating system dependent parts has made adding multi-threading support, 64 bit support and Unicode support easier.

A4wmem32.dll: In 16 bit Windows, the built-in memory management was really slow, so everyone used their own. Now this DLL uses the default Windows and C memory management, but it is still important for global memory that needs to be shared across processes. Originally this was done through the data segment of a fixed DLL, but now it is done through memory mapped files.
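As an aside, .Net has a direct equivalent of this technique. The sketch below is purely illustrative (the mapping name and size are made up, and this is not the a4wmem32 API); it just shows how a named memory mapped file lets two processes share the same bytes:

using System;
using System.IO.MemoryMappedFiles;

class SharedMemoryDemo
{
    static void Main()
    {
        // Any process that opens the same named mapping sees the same memory,
        // which is the same idea a4wmem32 uses for shared global memory.
        using (var map = MemoryMappedFile.CreateOrOpen("SharedGlobalDemo", 1024))
        using (var view = map.CreateViewAccessor())
        {
            view.Write(0, 42);                    // one process writes...
            Console.WriteLine(view.ReadInt32(0)); // ...another would read 42
        }
    }
}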

A4wlleng.dll: This is just a language DLL that holds some lower level error messages used by System Manager.

A4wsqls.dll: This is the SQL Server database driver (there is also a4worcl.dll for Oracle and a4wbtrv.dll for Pervasive.SQL). This is dynamically loaded based on the type of database you are connecting to. For more on our database support see this article.
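To illustrate the pattern (this is not Sage’s code; the database type strings and the loader are hypothetical), picking and loading a driver DLL at runtime looks something like this:

using System;
using System.Runtime.InteropServices;

static class DriverLoader
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    // Map the configured database type to its driver DLL and load it.
    public static IntPtr LoadDriver(string databaseType)
    {
        string dll;
        switch (databaseType)
        {
            case "MSSQL":     dll = "a4wsqls.dll"; break;
            case "ORACLE":    dll = "a4worcl.dll"; break;
            case "PERVASIVE": dll = "a4wbtrv.dll"; break;
            default: throw new ArgumentException("Unknown database type");
        }
        IntPtr handle = LoadLibrary(dll);
        if (handle == IntPtr.Zero)
            throw new DllNotFoundException(dll);
        return handle; // the caller would then look up the driver's entry points
    }
}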

Cato3msk.dll, cato3dat.dll: The cato3 DLLs are the old CA common controls. We don’t use these in our UIs anymore, but cato3msk.dll provides the mask processing that is used by the Views. Similarly, we don’t use the date control anymore, but we do use a routine here to format dates in error messages correctly.

A4wroto.dll: This handles the loading of the various View DLLs as well as the various UIs we’ve used in the past. It loads the roto.dat files and handles loading the right DLLs when View subclassing is going on or stub Views need to be used.

A4wsem.dll: This handles the locking of the semaphor.bin file. It allows processes to lock the company database, an application or the whole site. It also handles application specific cross workstation locking needs.
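Conceptually the mechanism is simple: a file on the shared data folder can be opened with exclusive access, and that exclusivity is visible to every workstation. Here is a rough sketch of the idea in .Net (illustrative only, not the A4wsem API; the share path is hypothetical, and the real DLL is finer grained so it can lock just a company or an application rather than the whole site):

using System;
using System.IO;

class SiteLockDemo
{
    static void Main()
    {
        // Opening the shared lock file with FileShare.None gives this process
        // exclusive access until the stream is closed; any other workstation
        // attempting the same open gets an IOException in the meantime.
        string lockFile = @"\\server\Sage300Share\SITE\semaphor.bin";
        try
        {
            using (var fs = new FileStream(lockFile, FileMode.Open,
                                           FileAccess.ReadWrite, FileShare.None))
            {
                Console.WriteLine("Lock acquired; do the exclusive work here.");
            }
        }
        catch (IOException)
        {
            Console.WriteLine("Another workstation holds the lock; try again later.");
        }
    }
}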

A4wrv.dll: This is the main DLL API entry point for the Views. It manages all the calling of the Views and handles other tasks like sending the calls for macro recording. For more on our View interfaces see this article.

A4wapi.dll: This is quite a hodge-podge of services for the Views like revision lists, error reporting and such. It also has support routines for the older CA-Realizer UIs. It is quite a big DLL and has most of our C level API in it.

A4wrpt.dll: This is our interface to Crystal Reports. It started as our interface to CA-RET, was then converted to Crystal using their CRPE DLL interface, then to Crystal’s COM interface, and now uses Crystal’s .Net interface.
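For a sense of what that .Net interface looks like to a caller, here is a minimal sketch using Crystal’s ReportDocument class (the report path, parameter and output file are made up for illustration, and this is the Crystal API itself, not a4wrpt.dll’s wrapper over it):

using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

class ReportDemo
{
    static void Main()
    {
        using (var report = new ReportDocument())
        {
            report.Load(@"C:\Reports\CustomerList.rpt");
            report.SetParameterValue("CompanyName", "Sample Company Ltd.");
            report.ExportToDisk(ExportFormatType.PortableDocFormat,
                                @"C:\Reports\CustomerList.pdf");
        }
    }
}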

A4wprgt.dll: This DLL handles replicating the system database tables into the various company databases when needed.

A4wmtr.dll: This is our meter DLL for long running processes. It can either put up a meter dialog or just report the current status and percent complete back to the caller. It also provides the API for cancelling long running processes.

Higher Level APIs

The next level up consists of the DLLs that make up our Java, COM and .Net interfaces. There is a bit of complexity here due to how our previous web deployed system worked. There the client could communicate back to the server, originally using DCOM and later with .Net Remoting. The .Net Remoting layer provides the communications layer for this web deployed mode and also acts as our .Net API. How you create your original session determines which actual DLLs and which calling conventions are used.

[Image: higher level API DLLs]
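For example, coming in through the .Net API, a session is created along these lines. This is a minimal sketch using the sample program’s hard coded SAMLTD/ADMIN sign-on; the Init arguments shown are typical placeholder values for the caller handle, application ID, program name and version, so treat them as assumptions to adjust for your own install:

using System;
using Accpac.Advantage; // the external .Net API assembly described below

class SessionDemo
{
    static void Main()
    {
        var session = new Session();
        session.Init("", "XY", "XY1000", "62A"); // placeholder app ID, program name, version
        session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);

        // The DBLink is your connection to the company database; from here
        // you can open Views or use the helper classes.
        DBLink link = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);
        Console.WriteLine(link.Company.Name); // assumes the Company helper exposes a Name property
    }
}

Which DLLs this ends up exercising depends on whether the session is local or remoted, as described below.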

A4wapiShim.dll: This is the C side of our Java JNI layer. It talks to all the lower level DLLs to get its work done.

Sajava.jar: This is the Java side of our Java JNI interface. It provides the Java classes that let Java programs interface to our Business Logic Views. For more on this interface see this article.

A4wcomsv.dll: This is the main workhorse for the COM and .Net APIs. It does all the heavy lifting and interfacing to the core DLLs.

Accpac.Advantage.COMSVR.Interop.dll: This just performs the .Net to COM transition and is generated by the MS tools.

Accpac.Advantage.Server.dll: The server side of the .Net API; it handles the .Net Remoting requests if called remotely, or just passes calls through otherwise.

Accpac.Advantage.Types.dll: Defines all the various types we use in our .Net API.

Accpac.Advantage.dll: This is the main external interface for our .Net API. For more on our .Net API see the series of articles starting with this one.

A4wcomexps.dll: Used when the VB UIs are going to talk .Net Remoting, this DLL is inside a4wcomex.cab.

A4wcomex.dll: The main entry point for the COM API.

Many More DLLs

There are many more DLLs in the Sage 300 runtime, but most of the others are for obsolete APIs like the xapi, the older a4wcom COM API, the cmd API, the icmd API, etc. There are other important ones, such as those to do with Database Setup, but the ones above are the main DLLs used when you talk to the Business Logic through one of the popular APIs.

Summary

For anyone interested, this should give you a good idea of what the main DLLs in the runtime folder do, and of how the various services in Sage 300 ERP are layered.

Written by smist08

February 8, 2014 at 5:19 pm

The Times They Are a-Changin’

with 7 comments

Introduction

Right now we have our nephew Ian living with us as he takes a Lighthouse Labs developer boot camp program in Ruby on Rails and web programming. This is a very intense course with 8 weeks of instruction followed by a guaranteed internship of at least 4 weeks at a sponsoring company. A lot of this is an immersion in the high tech culture that has developed in downtown Vancouver. It coincides with my own work expanding the Sage 300 ERP development team in Richmond and our hiring efforts over the past several months. This article is based on a few observations and experiences around these two happenings.

Sage 300 ERP has been around for over thirty years now, so quite a few generations of programmers have worked on the product. Over this time the theories of what a high tech office should look like, and of what a talented programmer wants in a company, have changed quite dramatically. As Sage moves forward we need to change with the times, adopt a lot of these new ways of doing things and accommodate these new preferred lifestyles.

Generally people go through three phases in their careers: starting out single, with no kids, and renting; transitioning to marriage, home ownership and eventually kids; and finally kids leaving home and thoughts of retirement. Of course these days there can be some major career changes along the way as industries are disrupted and people need to retrain and reeducate themselves. Every office needs a good mix in order to build a diverse, energetic and innovative culture that has experience but is still willing to take risks.

Offices or No Offices

When I started with Accpac at Computer Associates, we were largely a cube farm, perhaps not too dissimilar to the picture below.

[Image: cube farm]

The goal was to have as much privacy as possible, which usually translated into high cube walls, other barriers and the ambition to one day move into an office. At the time Microsoft advertised that every employee on their campus got an office so they could concentrate and think and be more effective at their work. I visited the Excel team around then, and they had two buildings packed with lots of very small offices, which led to long, narrow, claustrophobic hallways.

A lot has changed since then. Software development has largely adopted the Scrum/Agile model, where people work together as a team and social interactions are very important. Further, as products move to the cloud, developers need to team up with DevOps and all sorts of other people who are crucial to their product’s success.

Now most firms adopt a more open office approach: there are no permanent offices, and everyone works together as a team.

[Image: open concept high tech office]

There is a lot of debate about which is better. People used to the privacy of offices and cubes are loath to lose it, while people who have been working in open offices can’t imagine moving back to cubes. Also, with more people working a percentage of their time from home, a permanent spot at the office doesn’t always make sense.

Downtown versus the Suburbs

When I started with CA the office was located in town near Granville Island. This was a great location: central, many good restaurants, and easily accessible via transit. Then we moved out to Richmond, to a sprawling high tech park like those of many similar companies in the 90s. These were all sprawling landscapes of three story office buildings, each surrounded by a giant parking lot, and all very similar whether in Richmond, Irvine, Santa Clara or elsewhere.

Now the trend is reversing and people are moving back to downtown. Most new companies are located in or near downtown, and several large companies have set up major development centers in town recently. Meanwhile the high tech parks in the suburbs are starting to have quite a few vacancies.

The Younger Generation

A lot of this is being driven by the twenty-something generation. What they look for in a company is quite different from what I looked for when I started out, due to quite a few demographic and lifestyle changes. A few key factors are:

  • The number of young people getting drivers licenses and buying cars is shrinking. There are a lot of reasons for this, but people who can’t drive have trouble getting to the suburbs.
  • People are having children later in life, often putting it off until their late thirties or even forties.
  • City cores are being revitalized. Even Calgary and Edmonton are trying to get urban sprawl under control.
  • Real estate in the desirable high tech centers like San Francisco, Seattle or Vancouver is extremely expensive. Loft apartments downtown are often the way to go.
  • Much more work is done at home and in coffee shops.

This all makes living and working downtown much more preferable. It is also leading to people requiring less space and looking for more social interactions.

Hiring that Younger Generation

To remain competitive a company like Sage needs to be able to hire younger people just finishing their education. We need the infusion of youth, energy and new ideas. If a company doesn’t get this then it will die. Right now the hiring market is very competitive. There is a lot of venture capital investment creating hot new companies, many existing companies are experiencing good growth and generally the percentage of the economy driven by high tech is growing. Another problem is that industries like construction, mining and oil are booming, often hiring people at very high wages before they even think about post-secondary education.

What we are finding is that many young people don’t have cars, live downtown and are looking to work in a cool open office concept building.

We are in the process of converting our offices to a more modern open office environment, and we do allow people to work at home some days. Maybe we will even be able to move back downtown once the current lease expires, or maybe we will need to create a satellite office downtown.

Generally we have to become more involved with the educational institutions by hiring co-op students and other interns, and we need to participate in more activities of the local developer and educational community, like the HTML500. We need to ensure that Sage is known to the students and that they consider it a good career path to embark on. Hiring co-op students now can often lead to regular full time employees later.

Since Sage has been around for a long time and has a large, solid customer base, we offer a stable work environment: you know you will receive your next paycheck, whereas many startups run out of funding or otherwise go broke. While the job market is hot young people often don’t worry about this too much, but once you have a mortgage it becomes more important.

Summary

The times are changing and not only do our developers need to keep retraining and learning how to do things differently, but so do our facilities departments, IS departments and HR departments. Change is often scary, but it is also exciting and stops life from becoming boring.

Personally, I would much rather work downtown (I already live there). I think I will be sad when I give up my office, but at the same time I don’t want to become the stereotypical old person yelling at the teenagers to get off my lawn. Overall I think I will prefer a more mobile way of working, not so tied to my particular current office.


Written by smist08

February 1, 2014 at 5:50 pm
