Stephen Smith's Blog

All things Sage ERP…

Posts Tagged ‘sage 300 erp’

Using the Sage 300 .Net Helper APIs

with 8 comments

Introduction

The main purpose of our .Net API is to access our Business Logic Views which I’ve blogged on in these articles:

An Introduction to the Sage 300 ERP .Net API
Starting to Program the Sage 300 ERP Views in .Net
Composing Views in the Sage 300 ERP .Net API
Using the Sage 300 ERP View Protocols with .Net
Using Browse Filters in the Sage 300 ERP .Net API
Using the Sage 300 .Net API from ASP.Net MVC
Error Reporting in Sage 300 ERP
Sage 300 ERP Metadata
Sage 300 ERP Optional Fields

However, there are a number of simple things that you need to do repeatedly, and going through the Views every time for these would be a bit of a pain. So in our .Net API we provide a number of helper classes that give you quick, efficient access to things like company, fiscal calendar and currency information.

With Sage Summit 2014 just a few weeks away (you can still register here), I can’t pre-empt any of the big announcements here in my blog (as much as I’d like to), so perhaps an easier .Net article instead. For many readers these examples will be fairly simple, but I’m always getting requests for source code, and I happen to have a test program that exercises these APIs that I can provide as an example. This program was written to help exercise and debug these APIs for our 64 Bit/Unicode version, which might explain why it tends to print a rather strange selection of fields from some of the classes.

Sample Program

The sample program for this article is a simple WinForms application that uses the Sage 300 ERP .Net API to get various information from these helper classes and then populates a multi-line edit control with the information gathered. The code is the dotnetsample (folder or zip) located in the folder on Google Drive at this URL. The code is hard-coded to access SAMLTD with ADMIN/ADMIN as the user id and password. You may need to change this in the session.Open call to match what you have installed/configured on your local system. I’ve been building and running this using Visual Studio 2013 with the latest SPs and the latest .Net.

[Screenshot: dotnetsample]

The Session Class

The session class is the starting point for everything else. Besides opening the session and establishing the DBLink, you can use this class to get some useful version information as well as some information about the user, like their language.
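
As a minimal sketch, opening a session in C# looks roughly like the following. The Init arguments and the exact property names here are from memory and should be treated as assumptions; the sample program is the authoritative version.

    using System;
    using ACCPAC.Advantage;

    class SessionSample
    {
        static void Main()
        {
            // Initialize and open a session against the sample company.
            Session session = new Session();
            session.Init("", "XY", "XY1000", "62A"); // version string is an assumption
            session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);

            // The session exposes version and user information; the property
            // name below is an illustrative assumption.
            Console.WriteLine(session.UserID);
        }
    }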

The DBLink Class

From the session you get a DBLink object that is then your connection to the database and everything in it. From this object you can open any of our Business Logic Views and do any processing that you like. Similarly, you can also get quick access to currency and fiscal calendar information from here. Of course you could do much of this by opening various Common Services Views, but that would involve quite a few calls. Additionally the helper APIs provide some caching and calculation support.

The Company Class

Accessing the Company property of the DBLink object is your quick shortcut to the various company options stored in Common Services in the CS0001 View. This is where you get things like the home currency, the number of fiscal periods, whether the company is multi-currency, and address information. Generally you will need something from here in pretty much anything you do.
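
Continuing the sketch above, getting at the company options looks something like this. OpenDBLink and the Company property follow the API described here; the individual property names are assumptions for illustration.

    // Continuing from the session opened above.
    DBLink link = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);
    Company company = link.Company;
    Console.WriteLine(company.HomeCurrency);          // assumed property name
    Console.WriteLine(company.NumberOfFiscalPeriods); // assumed property name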

The FiscalCalendar Class

You can get a FiscalCalendar object from the FiscalCalendar property of the DBLink. In accounting, fiscal periods are very important, since everything is eventually recorded in the General Ledger in a specific fiscal year/fiscal period. G/L mostly doesn’t care about exact dates, but really cares about the fiscal year and period. For accurate accounting you always have to be very careful that you are putting things in the correct fiscal year and period. In Common Services we set up our fiscal years and fiscal periods, assigning them various starting and ending dates. Corporate fiscal years don’t have to correspond to a calendar year and usually don’t. For instance the Sage fiscal year starts on October 1 and ends on September 30.

This object then gives you methods and properties to get the starting and ending dates for fiscal periods, years or quarters. Further, it helps you calculate which fiscal year/period a particular date falls in. Often all these calculations are done for you by the Views, but if you are entering things directly into G/L these can be quite useful. Some of the parameters to these methods are a bit cryptic, so perhaps the sample program will help anyone having trouble.
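
The shape of these calls is something like the following; the method name and arguments are hypothetical placeholders for the real (cryptic) signatures, which the sample program demonstrates.

    // Hypothetical illustration only; see the sample program for real signatures.
    FiscalCalendar calendar = link.FiscalCalendar;
    // e.g. find the fiscal year and period a document date falls in:
    // calendar.GetYearPeriod(new DateTime(2014, 7, 15), out year, out period);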

The Currency Classes

There are several classes for dealing with currencies: the Currency, CurrencyTable and CurrencyRate classes. You get these from the DBLink’s GetCurrency, GetCurrencyTable and GetCurrencyRate methods. There is also a GetCurrencyRateTypeDescription method to get the description for a given Currency Rate Type.

The Currency object contains information for a given currency, like the description, number of decimals and decimal separator. Combined with the Currency Rate Type, there is a CurrencyTable entry for each combination of Currency Code and Currency Rate Type. Then for each of these there are multiple CurrencyRates, one for each date on which a rate is entered.

So if you want to do some custom currency processing for some reason, then these are very useful objects to use. The sample program for this article has lots of examples of using all of these.
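
For instance, here is a quick sketch using the getter methods named above. The parameter lists, the “SP” rate type and the property names are my assumptions; the sample program has the exact signatures.

    // Assumed signatures, for illustration.
    Currency yen = link.GetCurrency("JPY");
    Console.WriteLine(yen.Description);   // assumed property name
    Console.WriteLine(yen.Decimals);      // assumed property name
    CurrencyTable table = link.GetCurrencyTable("JPY", "SP");
    CurrencyRate rate = link.GetCurrencyRate("JPY", "SP", DateTime.Today);
    Console.WriteLine(link.GetCurrencyRateTypeDescription("SP"));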

Remember to always test your programs against a multi-currency database. A common bug is to do all your testing against SAMINC and then have your program fail at a customer site that is running multi-currency. Similarly it helps to test with a home currency, like Japanese Yen, that doesn’t have two decimal places.

Summary

This was just a quick article about some of the useful helper functions in our Sage 300 ERP .Net API that help you access various system data quickly. You could perform any of these functions through the Business Logic Views, but since they are used so frequently, these helpers save a lot of programming time.

Evicting Users from Sage 300 ERP

with 5 comments

Introduction

Generally Sage 300 ERP is used in a multi-user environment where users could be distributed across a large building or located in many different sites. Further, Sage 300 ERP uses a concurrent licensing model for users, so if you have 10 Lanpaks then 10 people can log in at once; however, it doesn’t matter which ten people they are.

Often companies save a bit of money by buying fewer Lanpaks than they have users of the product. Perhaps a clerk works the early shift from 7-3 and then, when they go home, a Financial Accountant runs some Financial Reports. But what happens if that clerk doesn’t sign off? What if they work at home and aren’t answering their phone? Now the Financial Accountant gets a message that all the Lanpaks are in use and can’t get their work done.

Evicting Users

To solve this problem, Sage 300 ERP 2014 Product Update 2 will be introducing an Evict Users feature. Previously we provided a detailed list of everyone in the system and what they are doing, which I blogged on here. Now you can also kick them out of the system to recover the Lanpak for someone else to use.

From the Current Users screen there is now a push button to “Sign Out Selected Users”. You then get a dialog with a dire warning, and are asked to enter the admin password and confirm before kicking out the desired users.

[Screenshot: evict]

Then in a minute or so, all the screens for that user will be terminated and their Lanpaks will be available for someone else to use.

Technical Details

So how is this accomplished? Basically, when you evict a user, the screen stores an encrypted file in the shared data folder. Periodically, any running Lanpak Managers check whether there is a new file and, if there is, whether it is for users they are managing. If so, the Lanpak Manager kills the processes of all the screens those users are running. The file is left in place for a few minutes, so the evicted user won’t be able to sign in again immediately.
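
Schematically, each Lanpak Manager’s polling loop works something like the sketch below. This is purely illustrative; every name in it is hypothetical, since the real file name, format, encryption and process handling are internal.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Threading;

    // Purely schematic: DecryptEvictRequest and KillScreenProcesses are
    // hypothetical stand-ins for internal routines.
    static void LanpakManagerLoop(string sharedDataFolder, ISet<string> managedUsers)
    {
        while (true)
        {
            foreach (string file in Directory.GetFiles(sharedDataFolder, "evict*"))
            {
                string userId = DecryptEvictRequest(file);
                if (managedUsers.Contains(userId))
                    KillScreenProcesses(userId);
                // The file stays in place for a few minutes so the evicted
                // user can't immediately sign back in.
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }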

This is a fairly simple scheme that is fairly effective for recovering Lanpaks. It works both for regularly run screens from the desktop as well as web deployed screens from the Web Desktop or from Sage CRM.

Since eviction is by user, you can evict the ADMIN user, which will kill your own session as well. If all your users sign in as ADMIN then it will kill all the users on the system. So beware that if multiple users share a user id, then they will all be killed and not just one workstation.

Performing Upgrades

Another use case that this method only partially addresses is kicking everyone out of the system so you can perform an upgrade (like a Product Update). Generally a DLL or EXE file in Windows is locked if a running process uses it. Hence you can’t update Sage 300 if someone has a4wapi.dll in use, for instance. It could be that this method does get everyone out of the system, but there are a few cases which may not work:

  • If someone signs off the Sage 300 Desktop but leaves it running. In this case the EXE and DLLs are still in use, but since it isn’t using a Lanpak and isn’t associated with a user, it can’t be killed this way. However I tend to think this is fairly unlikely.
  • It won’t be effective against things that quickly open sessions, do something and then close the session. This would include things like the Sage CRM integration, where the custom Sage CRM pages open a session, load their data and close the session. However things like this tend to be Web Servers and can usually be stopped remotely or at least from a central place.
  • Things that we allow to use Sage 300 without using a Lanpak. This would include things like parts of the Sage CRM integration, the Sage HRMS integration and the Sage 300 Portal.
  • For third party products, if they are full SDK, it will definitely work on them. If they aren’t full SDK it may or may not work depending on how they are built.

Keep in mind that the main purpose of this feature is to manage Lanpaks; performing upgrades is just a secondary thing, where hopefully it helps, but it may not be a complete solution.

The Scary Warning

The warning message when you run this is fairly severe. Part of this is because we are killing the processes that are connected to our API. Generally you won’t corrupt the main database because everything is protected by transactioning. So if you do kill a process while it is, say, posting a batch, it just means the transaction in process will be rolled back and it will have to be done again later. Generally the expectation is that this feature is used once people have gone home and aren’t doing anything, in which case it is harmless. Further, in the user list screen you can see what people are doing, so don’t kill the person running Day End.

However, if someone is using the UI and resizing columns in a grid, you could catch things at the wrong time and corrupt a *_p.ism file. But these can be deleted, and sometimes repaired with ScanIsam. If you are running a non-SDK third party product, I can’t really say what will happen if it’s killed.

Summary

The Evict Users feature is the number one feature as voted on our Ideas web site, and it is now in the product. So keep making suggestions at https://www11.v1ideas.com/Sage300ERP/Accpac and voting on suggestions already in the system that you would like to see implemented.

This feature will make it easier for companies to manage their Lanpaks and get better value from the system. Hopefully it will make managing upgrades a bit easier as well.

Written by smist08

May 24, 2014 at 4:17 pm

Unstructured Time at Sage

with 4 comments

Introduction

Unstructured time is becoming a common way to stimulate innovation and creativity in organizations. Basically you give employees a number of hours each week to work on any project they like. They do need to make a proposal and, at the end, give a demo of working software. The idea is to work on projects that developers feel are important and are passionate about, but that the business in general considers not worthwhile, too risky or very low priority. Companies like Google and Intuit have been very successful at implementing this and getting quite good results.

[Comic: Dilbert on Google 20% time]

Unstructured Time at Sage

The Sage Construction and Real Estate (CRE) development team at Sage has been using unstructured time for a while now. They have had quite a lot of participation and it has led to products like a time and expense iPhone application. Now we are rolling out unstructured time to other Sage R&D centers including ours, here in Richmond, BC.

At this point we are starting out slowly with 4 hours of unstructured time a sprint (every two weeks). Anyone using this needs to submit a project proposal and then do a demo of working code when they judge it’s advanced enough. The proposals can be pretty much anything vaguely related to business applications.

The goal is for people to work on things they are passionate about; to get a chance to play with new bleeding edge technologies before anyone else; to develop that function, program or feature that they’ve always thought would be great, but that the business has always ignored. I’m really looking forward to what the team will come up with.

[Cartoon: “So many toys, so little unstructured time” (New Yorker)]

We are still doing Hackathons, Ideajams and our regular innovation process. This is just another initiative to further drive innovation at Sage.

Crazy Projects at Google

Our unstructured time needs to be used for business applications, but I wonder what unstructured time is like at Google, where they seem to come up with things that have nothing to do with search or advertising. Is it Google’s unstructured time that leads to self-driving cars, Google Glass, military robots, human brain simulations or any of their many green projects? Hopefully these get turned into good things and aren’t just Google trying to create SkyNet for real. Maybe we’ll let our unstructured time go crazy as well?

Anathem

I’m a big fan of Neal Stephenson, and recently read his novel Anathem. Neal’s novels can be a bit off-putting since they are typically 1000 pages long, but I really enjoy them. One of the themes in Anathem is monasteries occupied by mathematicians that are divided up into groups by how often they report their results to the outside world. The lowest order reports every year, next is a group that reports every ten years, then a group that reports every 100 years and finally the highest group that only reports every 1000 years. These groups don’t interact with anyone outside their order except for the week when they report and exchange information/literature with the outside world. This is in contrast to how we operate today, where we are driven by “internet time” and have to produce results quickly and ignore anything that can’t be done quickly.

So imagine you could go away for a year to work on a project, or go away for ten years to work on something. Perhaps going away for 100 years or 1000 years might pose some other problems that the monks in the novel had to solve. The point is to imagine what you could accomplish if you had that long. Would you use different research approaches and methods than we typically use today? Certainly an intriguing prospect, contrasting with where we currently need to produce something every few months.

My Project

So why am I talking about Anathem and unstructured time together? Well, one problem we have is how do you get started on big projects with lots of risk? Suppose you know you need to do something, but doing it is hard and time consuming. Every journey has to start with the first step, but sometimes making that first step can be quite difficult. I’ve had the luxury of being able to do unstructured time for a while, because I’m a software architect and not embedded in an agile sprint team. So I see technologies that we need to adopt, but they are large and won’t be on Product Managers’ road maps.

So I’ve done simple POCs in the past, like producing a mobile app using Argos. But more recently I embarked on producing a 64-Bit version of Sage 300. This worked out quite well and wasn’t too hard to get going. But then I got ambitious and decided to add Unicode into the mix. This is proving more difficult, but is progressing. The difficulty with these projects is that they involve changing a large amount of the existing code base, and estimating how much work they are is very difficult. As I get a Unicode G/L going, it becomes easier to estimate, but I couldn’t have taken the first step on the project without using unstructured time.

Part of the problem is that we expect our Agile teams to accurately estimate their work and then rate them on how well they do this (they are accountable for their estimates). This has the side effect that they are then very resistant to working on things that are open ended or hard to estimate. Generally, for innovation to take hold, the performance management system needs a bit of tweaking to encourage innovation and higher risk tasks, rather than only encouraging meeting commitments and making good estimates.

Now unlike Anathem, I’m not going to get 100 years to do this or even 10 years. But 1 year doesn’t seem so bad.

Summary

Now that we are adding unstructured time to our arsenal of innovation initiatives, I have high hopes that we will see all sorts of innovative new products, technologies and services emerge at the end. Of course we are just starting this process, so it will take a little while for things to get built.

Multi-Threading in Sage 300

with 6 comments

Introduction

In the early days of computing you could only run one program at a time on a PC. This meant if you wanted to run 10 programs at once you needed 10 computers. Then bit by bit multitasking made its way from mainframes and Unix to PCs, which allowed you to run quite a few programs at a time. Doing this meant you could run all 10 programs on one computer, and this worked quite well. However there was still quite a lot of overhead, since each program used a lot of memory and switching between them wasn’t all that fast. This led to the idea of multi-threading, where you run very lightweight tasks inside a single program. These use the same memory and resources as the program they are running in, so switching between them is very quick, and the resources consumed by adding more threads are minimal.

Enter the Web

Think about how this affects you if you are building a web server, particularly if you are running in the cloud. If you were single process, then each web user running your app would have a separate VM to handle their requests, and they would interact with that VM. There would be a load balancer that routes each user’s requests to the appropriate VM. This is quite an expensive way to run, since you typically pay quite a bit a month for each VM. You might be surprised to learn that there are quite a few web applications that run this way. The reason they do this is for greater security, since in this model each user is completely separated from the others, as they are really running against separate machines.

The next level is to have the web server start a separate process to handle the requests for a given user. Basically when a new user signs on, a new process is started and all his requests are routed to this process. This model is typically used by applications that don’t want to support multi-threading or have other concerns. Again quite a few web applications run this way, but due to the high resource overhead of each process, you can only run at best a hundred or so users per server. Much better than one per VM, but still for the number of customers companies want to use their web site, this is quite expensive.

The next level of efficiency is to have each new user that signs on just start a new thread. This is way less overhead, since each thread uses only a small amount of thread local storage and switching between running threads is very quick. Now we are getting into having thousands of active users running off each web server.

[Image: tangled threads]

This isn’t the whole story. The next step is to make your application stateless. This means that rather than each user getting their own thread, we put all the threads in a common pool. Then when a request for a user comes in, we just use a free thread from the pool to process the request. This way we don’t keep any state on the server for each user, and we only need enough threads to handle the number of active requests at a given time. This means that while a user is thinking or reading a response, they are using no server resources. This is how you get web applications like Facebook that can handle billions of users (of course they still use tens of thousands of servers to do this).
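
As a generic illustration (not Sage 300 code), stateless handling in C# looks something like this, where Request and ProcessRequest are hypothetical stand-ins:

    using System.Threading;

    static void OnRequestArrived(Request request)
    {
        // No per-user thread: a free pool thread handles the request and
        // then returns to the pool. Any state needed travels with the
        // request itself, so nothing is held per user between requests.
        ThreadPool.QueueUserWorkItem(_ => ProcessRequest(request));
    }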

These techniques aren’t only done in the operating system software; modern hardware architectures have been optimized for them as well. Modern server CPUs have multiple cores which are very efficient at running multiple threads in parallel. To really take advantage of the power of these processors you need to be a multi-threaded application.

Sage 300 ERP

As Sage 300 moves to the cloud, we have the same concerns. We’ve been properly multi-process since our 32-Bit version, back in the version 4 days (the 16-Bit version wasn’t really multi-process because 16-Bit Windows wasn’t properly multi-process).

We laid the foundations for multi-threaded operation in version 5.6A and then fully used them starting with version 6.0A for the Portal and Quote to Orders. Since then we’ve been improving our multi-threading, as it is a foundational component for utilizing our Business Logic Views from web applications.

If you look at a general text book on multi-threading it looks quite difficult, since you have to be very careful to protect the right memory at the right time. However, a lot of the time these books are looking at highly efficient parallel algorithms, whereas we want a thread to handle a specific request for a specific user to completion. We never use multiple threads to handle a single request.

From an API point of view this means each thread has its own .Net session object and its own set of open Sage 300 Business Logic Views. We keep these cached in a pool to be checked out, but we never have more than one thread operating on one of these at a time. This then greatly simplifies how our multi-threading support needs to work.
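
The checkout pattern can be sketched like this (illustrative only; PooledSession stands in for a .Net session object plus its open Views):

    using System.Collections.Concurrent;

    static readonly ConcurrentBag<PooledSession> pool = new ConcurrentBag<PooledSession>();

    static void HandleRequest(Request request)
    {
        PooledSession session;
        if (!pool.TryTake(out session))
            session = PooledSession.Create(); // open a session and its Views
        try
        {
            session.Process(request); // only this thread touches it right now
        }
        finally
        {
            pool.Add(session); // return it to the pool for the next request
        }
    }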

If you’ve ever programmed our Business Logic Views, you’ll know they have had the idea of being multi-threaded built into them from day 1. For instance, all variables that need to be kept from call to call are stored associated with the view handle. There are no global variables in these Views. Further, since even single threaded programs open multiple copies of the Views and use them recursively, a lot of this support has been fully tested, since it’s required for these cases as well.

For version 5.6A we had to ensure that our API had a thread safe alternative for every API, and that any API that wasn’t thread safe was deprecated. The sort of thing that causes threading problems is an API function that, say, just returns TRUE or FALSE on whether it succeeds, and then if you want to know the real reason you need to check a global variable for the last error return code. The regular C runtime has a number of functions of this nature, and we used to do this for our BCD processing. Alternatives to these functions were added that just return the error code. The reason the global variable is bad is that another thread could call one of these functions and reset this variable in between you getting the failed response and then checking the variable.
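
In C#-flavoured pseudocode, the race looks like this; DoSomething, LastError and DoSomethingEx are hypothetical names standing in for the old and new style calls.

    // Old style: not thread safe.
    bool ok = Api.DoSomething(); // only returns TRUE or FALSE
    if (!ok)
    {
        // Racy: another thread may call the API and overwrite LastError
        // between the failed call above and this read.
        int code = Api.LastError;
    }

    // New style: thread safe, since the error code is the return value
    // and no shared global is involved.
    int result = Api.DoSomethingEx();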

State

If you’ve worked with our Views, you will know that they are quite stateful. We can operate statelessly for simple operations, like basic CRUD operations on simple objects. However, for complicated data entry (like Order Entry or Invoice Entry) we do need to keep state while the user interacts with the document. This means we aren’t 100% stateless at this point, but our hope is that as we move forwards we can reduce the amount of state we keep, or reduce the number of interactions that require keeping state.

Testing Challenges

Fortunately testing tools are getting better and better. We can test using the Visual Studio Load Tester as well as JMeter. Using these tools we can uncover the various resource leaks, memory problems and deadlocks which occur when multiple threads go wrong. Static code analysis tools and good old fashioned code reviews are very useful in this regard as well.

Summary

As we progress the technology behind Sage 300, we need to make sure it has the foundations to run as a modern web application and our multi-threading support is key to this endeavor.

 

Written by smist08

February 22, 2014 at 5:37 pm

10 Questions for Sage Uncle Steve

with 3 comments

This is a guest blog posting by my wife, Cathalynn Labonté-Smith, though I’m the one answering the questions.

***

It may seem odd to readers to interview the man I’ve looked at across the dinner table for 29 plus years in his own blog, but we’ve had a recent addition to our household, Ian. Steve’s nephew is an enthusiastic young man who is in a programmer’s boot camp (see Steve’s blog entry The Times They Are a-Changin’) and, as an educator, this has brought to my mind new questions for my darling husband beyond, “How was your day?” and “Will you be able to fit in a vacation around your business travel this year?” Also, he didn’t like my alternate idea of a Valentine to Computing.

We got out of the habit of talking about the details of Steve’s work since the time I worked as a technical writer in the field of wireless technology nearly a decade ago. For couples out there who both work in the same or related fields, you will know what I mean when I say it’s just best to unwind and avoid topics to do with work in the off hours.

When I left tech writing and became a teacher, occasionally I’d walk into a business class that was learning Accpac for Windows or Simply Accounting. Trained as an English teacher I’d do what all on-call teachers do when outside their subject area: stick to the lesson plan, get help from the brightest students in the class and muddle through as best I could. So it was fun to share those experiences with Steve and I actually learned a bit about the Sage products.

It’s been many years since I’ve been in the classroom, but having taught career preparation I want to know the following from Steve for programmers coming on stream. I know that Steve’s blog audience is unlikely to be junior programmers but I thought this might get his more senior executive readers thinking about what legacy they can pass along to new programmers.

Whoa, I can hear you say, what makes you think they can hear us with their ears jammed with ear buds and if they could we don’t speak their lingo? I’m not saying they’re going to sit through a PowerPoint of your ruminations and really the best example is modelling, after all, and as a teacher I found that it was an equal exchange. You can learn as much from your novice employees as they can learn from you–just about different things.

When I met Steve he was a Teacher’s Assistant in the Math Department at the University of British Columbia working on his Master’s Degree. His Math 100 class was just him, the blackboard, a huge lecture hall packed full of nervous first-years and a piece of chalk. I was never his student, no; I was on the other side of campus in Creative Writing workshops in poetry, fiction and children’s writing.

After his degree, he worked at various software companies in many different fields as a contractor, consultant or employee before finding his long-time home at Sage. Aside from having over twenty years at Sage, now in his current role as Chief Architect, I’m curious as to what Uncle Steve would say to Ian if he were around longer than it takes for him to gulp down his dinner and head upstairs for more studying.

1. Steve, what kind of guidance can you offer for formal programs a would-be programmer should choose for the best future employment and advancement? Can you compare it to your formal programming education?

A. I learned to program originally in Grade 11. Nowadays people have lots of opportunity to learn how to program at a young age. There are quite a few exceptional online programs where you can learn to program, for example, Khan Academy. Khan Academy teaches you to program in JavaScript while creating fun drawings and animations. Programming, like most skills, requires practice to master. In the book Outliers, Malcolm Gladwell maintains that it takes 10,000 hours of practice to really master something, so starting early really helps.

My undergraduate and master’s degrees are in Mathematics and not Computer Science. However, I took a few CS courses along the way (in things like Numerical Analysis and Operations Research), so strictly speaking I don’t have a formal CS background.

I was in the Co-op program at the University of Victoria so when I did graduate I had four work terms of job experience. Plus, I was always working on some sort of programming project on my trusty Apple II Plus computer (usually involving Fractals).

It doesn’t really matter so much which programming languages you learn, just learn a variety. After all, things are changing so fast these days that you need to expect to keep learning these as you progress through your career.

To summarize, you need something that will give you lots of practice programming, a few formal courses to give you credibility and you need to be a voracious reader.

2. In your undergraduate degree, you went through a co-op program. Is this something that you recommend and why? For example, does it make a programmer more desirable as a future employee?

A. Yes, absolutely. I think intern type programs are terrific ways to get job experience and references ready for that first real job. I did four co-op work terms and learned an awful lot about how various companies operate and what is involved. It is a great chance to get some experience with a variety of companies, perhaps a large one, a small one and a government one. I certainly give credit for co-op work terms when I’m hiring.

3. What kind of summer, part-time or volunteer work might add to and develop their skills?

A. I would look for something where you are giving back to the community, such as donating your time to a charity and if you have the chance to travel when you do this then even better. Again do something that interests you and you are passionate about.

4. What kind of advice can you give new programmers about how to pick their first employer?

A. Chances are you are going to have several jobs throughout your career. More than likely the pay will be similar, so go for something interesting. Do some research on the companies you are applying to and look beyond the initial job you will have there. Also, consider travelling to a new location for your first job to get a bit more experience of the world as well.

5. Just like some doctors are better at staying current on the latest treatments and research, how do programmers stay current when there seem to be so many new technologies and programming languages to learn? How do you manage to filter through all of it to get what will last and have future value? Or is it even critical that programmers stay current, or is there enough maintenance work to go around forever?

A. I think the number one rule is to not rely on your employer for this. This is really your own professional responsibility. Employers will train you for what you need immediately, but usually not for much else, and not for things that they aren’t interested in.

One of the great things about the profession today is that most of the programming tools that are important are either open source or have free versions available (like Visual Studio Express). So you can dabble with all sorts of things in your spare time. All you really need is a computer and an Internet connection. I really believe in learning by doing. So pick something new and interesting and do a small project in it to see if you want to go deeper.

6. What are some common pitfalls new programmers could avoid in their early careers?

A. I think the most common pitfalls are either being too loyal to a company or giving up on a company too easily.

Often people early in their career have very high and probably unrealistic expectations of how well a company is run. This often gives rise to a lot of job changes after quick stints. This can be a mistake if you don’t get ahead and you develop a resume with lots of short stays.

The reverse is the other common mistake—being in a job that doesn’t work, but trying to stick it out too long rather than cutting the cord. Leaving is often a hard decision to make, but is often easier earlier in your career. Finding the right compromise between these two extremes can be very difficult.

7. What is the most valuable lesson or lessons that you’ve learned throughout your career that you could share with a new programmer?

A. That things are often darkest before the dawn. On any project at some point things are going to look bad, problems look unsolvable, bugs are piling up and deadlines are being missed. The lesson here is not to take the whole world’s problems on your shoulders, but to just work through the problems one by one. Often these are difficult problems that take much more time than you would have thought, but sticking to this eventually yields the light at the end of the tunnel.

Another take on this is to remain optimistic in the face of adversity. Or follow the Hitchhiker’s Guide to the Galaxy’s main advice: Don’t Panic! (Their other advice of always carry a towel, I’m not so sure about).

[Image: Don’t Panic and carry a towel]

8. Who were your early role models?

A. Bill Gates and Steve Wozniak for what they did to start their companies. Steve Jobs for what he did when he returned to Apple.

9. Is there anything you would have done differently in your early career knowing what you know now?

A. There are always so many shoulda coulda wouldas. Now I know which companies back then paid the big bucks in stock options, but it’s hard to predict when looking forwards. I sometimes wonder if I should have moved from Vancouver, but then you get a beautiful day like today and just say “Nah”.

10. Is there a question that I didn’t ask that you wished I did?

A. No, this blog is already getting quite long. :)

Point taken, Steve, this is a good place to wrap it up. Oh and, Happy Valentine’s Day, to you and to all your readers.

Written by smist08

February 15, 2014 at 4:34 pm

The Sage 300 System Manager Core DLLs

with 9 comments

Introduction

We hold a developer’s exchange (DevEx) every couple of weeks where one of our developers volunteers to present to all the other developers in our office. This past week I presented at the DevEx on what all the core DLLs in our Sage 300 runtime folder do. I thought this might be of interest to a wider audience, so here are the gory details.

Architecture

Our marketing-supplied architecture diagram is the following, which highlights our three tiers and hides a lot of the details of how the object repository, APIs and supporting services are implemented. I’ve blogged previously on our Business Logic Views. In this article I’m going to go into more detail on all the DLLs that provide the framework to support all of this.

[Diagram: Sage 300 architecture]

Lower Level DLLs

If you are an ISV developing Sage 300 SDK applications, or have worked for Sage on the 300 product, then you will have encountered a number of these DLLs. I’m only looking at a subset of current DLLs, and I’m not looking at all the DLLs that support older technologies that are still present to maintain compatibility with add-ons.

[Diagram: lower level DLLs]

I didn’t add arrows to this diagram since everything pretty well calls everything else below it. But I segregated the DLLs a bit by how low or high level they are. So here is a quick synopsis of each one:

A4wcompat.dll: We created this DLL back when we did a native port of the Sage 300 Views for Linux. This DLL isolates operating system differences that need more than some clever #defines. A big part of this is the thread and process synchronization and locking support. Even though we never released the native Linux version, this isolation of the operating system dependent parts has made adding multi-threading support, 64 bit support and Unicode support easier.

A4wmem32.dll: In 16 bit Windows, the built in memory management was really slow, so everyone used their own. Now this DLL uses the Windows and C default memory management, but is still important for global memory that needs to be shared across processes. Originally this was done through the data segment of a fixed DLL, but now is done through memory mapped files.

A4wlleng.dll: This is just a language DLL that holds some lower level error messages used by System manager.

A4wsqls.dll: This is the SQL Server database driver (there is also a4worcl.dll for Oracle and a4wbtrv.dll for Pervasive.SQL). This is dynamically loaded based on the type of database you are connecting to. For more on our database support see this article.

Cato3msk.dll, cato3dat.dll: The cato3 DLLs are the old CA common controls. We don’t use these in our UIs anymore, but cato3msk.dll provides our mask processing that is used by the Views. Similarly we don’t use this date control, but do use a routine here to format dates in error messages correctly.

A4wroto.dll: This handles the loading of the various View DLLs as well as the various UIs we’ve used in the past. It loads the roto.dat files and handles loading the right DLLs when View subclassing is going on or stub Views need to be used.

A4wsem.dll: This handles the locking of the semaphor.bin file. It allows processes to lock the company database, an application or the whole site. It also handles application specific cross workstation locking needs.

A4wrv.dll: This is the main DLL API entry point for the Views. It manages all the calling of the Views and handles other tasks like sending the calls for macro recording. For more on our View interfaces see this article.

A4wapi.dll: This is quite a hodge-podge of services for the Views like revision lists, error reporting and such. It also has support routines for the older CA-Realizer UIs. This is quite a big DLL and has most of our C level API in it.

A4wrpt.dll: This is our interface to Crystal Reports, it started as our interface to CA-RET then was converted to Crystal using their CRPE DLL interface, then converted to Crystal’s COM interface and now uses Crystal’s .Net Interface.

A4wprgt.dll: This DLL handles replicating the system database tables into the various company databases when needed.

A4wmtr.dll: This is our meter DLL for long running processes. It can either put up a meter dialog or just report back to the caller, the current status and percent complete. It also provides the API for cancelling long running processes.

Higher Level APIs

The next level up consists of some of the DLLs that make up our Java, COM and .Net interfaces. There is a bit of complexity here due to how our previous web deployed system worked. There we could communicate back to the server, originally using DCOM and later with .Net Remoting. The .Net Remoting layer provides both the communications layer for this web deployed mode and also acts as our .Net API. How you create your original session determines which actual DLLs and which calling conventions are used.

[Diagram: higher level APIs]

A4wapiShim.dll: This is the C side of our Java JNI layer. It talks to all the lower level DLLs to get its work done.

Sajava.jar: This is the Java side of our Java JNI interface. This allows Java programs to easily call Java classes to interface to our Business Logic Views. For more on this interface see this article.

A4wcomsv.dll: This is the main workhorse for the COM and .Net APIs. It does all the heavy lifting and interfacing to the core DLLs.

Accpac.Advantage.COMSVR.Interop.dll: This just performs the .Net to COM transition which is created by the MS tools.

Accpac.Advantage.Server.dll: Server side of the .Net API, handles the .Net Remoting requests if remotely called or just passes through otherwise.

Accpac.Advantage.Types.dll: Defines all the various types we use in our .Net API.

Accpac.Advantage.dll: This is the main external interface for our .Net API. For more on our .Net API see the series of articles starting with this one.

A4wcomexps.dll: Used when the VB UIs are going to talk .Net Remoting, this DLL is inside a4wcomex.cab.

A4wcomex.dll: The main entry point for the COM API.

Many More DLLs

There are many more DLLs in the Sage 300 runtime, but most of the others are for obsolete APIs like the xapi, the older a4wcom COM API, the cmd API, the icmd API, etc. There are other important ones, like those to do with Database Setup, but these are the main ones used when you talk to the Business Logic through one of the popular APIs.

Summary

For anyone interested this should give you a good idea of what the main DLLs in the runtime folder do. And give you an idea of how the various services in Sage 300 ERP are layered.

Written by smist08

February 8, 2014 at 5:19 pm

The Times They Are a-Changin’

with 7 comments

Introduction

Right now we have our nephew Ian living with us as he takes a Lighthouse Labs developer boot camp program in Ruby on Rails and Web Programming. This is a very intense course that has 8 weeks of instruction and then a guaranteed internship of at least 4 weeks with a sponsoring company. A lot of this is an immersion in the current high tech culture that has developed in downtown Vancouver. This corresponds with my working to expand the Sage 300 ERP development team in Richmond and our hiring efforts over the past several months. This article is based on a few observations and experiences around these two happenings.

Sage 300 ERP has been around for over thirty years now and this has caused us to have quite a few generations of programmers all working on the product. Certainly over this time the various theories of what a high tech office should look like and what a talented programmer wants in a company has changed quite dramatically. As Sage moves forwards we need to change with the times and adopt a lot of these new ways of doing things and accommodate these new preferred lifestyles.

Generally people go through three phases of their career: starting out single, with no kids, and renting; transitioning to married, home ownership and eventually kids; and finally kids leaving home and considering retirement. Of course these days there can be some major career changes along the way as industries are disrupted and people need to retrain and reeducate themselves. Every office needs a good mix to build a diverse, energetic and innovative culture, which has experience but is still willing to take risks.

Offices or No Offices

When I started with Accpac at Computer Associates, we were largely a cube farm, perhaps not too dissimilar to the picture below.

[Photo: cube farm]

The ambition was to have as much privacy as possible, which usually translated to high cube walls, other barriers and the ambition to one day move into an office. At the time Microsoft advertised that on their campus every employee got an office, so they could concentrate and think to be more effective at their work. I visited the Excel team at this time and they had two buildings packed with lots of very small offices, which led to long, narrow, claustrophobic hallways.

A lot has changed since then. Software development has much more adopted the Scrum/Agile model where people work together as a team and social interactions are very important. Further as products move to the cloud, the developers need to team up with DevOps and all sorts of other people that are crucial for their product’s success.

Now most firms adopt a more open office approach. There are no permanent offices; everyone works together as a team.

[Photo: open office]

There is a lot of debate about which is better. People used to the privacy of offices and cubes are loath to lose that. People that have been using the open office approach can’t imagine moving back to cubes. Also, with more people working a percentage of their time from home, a permanent spot at the office doesn’t always make sense.

Downtown versus the Suburbs

When I started with CA, the office was located in town near Granville Island. This was a great location: central, many good restaurants, and easily accessible via transit. Then we moved out to Richmond, to a sprawling high tech park, like many similar companies did in the 90s. These were all sprawling landscapes of three story office buildings, each one with a giant parking lot surrounding it. All very similar, whether in Richmond, Irvine, Santa Clara or elsewhere.

Now the trend is reversing and people are moving back to downtown. Most new companies are located in or near downtown and several large companies have setup major development centers in town recently. Now the high tech parks in the suburbs are starting to have quite a few vacancies.

The Younger Generation

A lot of this is being driven by the twenty-something generation. What they look for in a company is quite different today than what I looked for when I started out. There are quite a few demographic changes as well as lifestyle changes that are driving this. A few key driving factors are:

  • The number of young people getting driver’s licenses and buying cars is shrinking. There are a lot of reasons for this. But people who can’t drive have trouble getting to the suburbs.
  • People are having children later in life. Often putting it off until their late thirties or even forties.
  • City cores are being re-vitalized. Even Calgary and Edmonton are trying to get urban sprawl under control.
  • Real estate in the desirable high tech centers like San Francisco, Seattle or Vancouver is extremely expensive. Loft apartments downtown are often the way to go.
  • Much more work is done at home and in coffee shops.

This all makes living and working downtown much more preferable. It is also leading to people requiring less space and looking for more social interactions.

Hiring that Younger Generation

To remain competitive a company like Sage needs to be able to hire younger people just finishing their education. We need the infusion of youth, energy and new ideas. If a company doesn’t get this then it will die. Right now the hiring market is very competitive. There is a lot of venture capital investment creating hot new companies, many existing companies are experiencing good growth and generally the percentage of the economy driven by high tech is growing. Another problem is that industries like construction, mining and oil are booming, often hiring people at very high wages before they even think about post-secondary education.

What we are finding is that many young people don’t have cars, live downtown and are looking to work in a cool open office concept building.

We are in the process of converting our offices to a more modern open office environment. We do allow people to work at home some days. Maybe we will even be able to move back downtown once the current lease expires? Or maybe we will need to create a satellite office downtown.

Generally we have to become more involved with educational institutions, by hiring co-op students and other interns. We need to participate in more activities of the local developer and educational community, like the HTML500. We need to ensure that Sage is known to the students and that they consider it a good career path to embark on. Often hiring co-op students now can lead to regular full time employees later.

Since Sage has been around for a long time and has a large, solid customer base, we offer a stable work environment. You know you will receive your next paycheck. Many startups run out of funding or otherwise go broke. While the job market is hot, young people often don’t worry about this too much, but once you have a mortgage, this can become more important.

Summary

The times are changing and not only do our developers need to keep retraining and learning how to do things differently, but so do our facilities departments, IS departments and HR departments. Change is often scary, but it is also exciting and stops life from becoming boring.

Personally, I would much rather work downtown (I already live there). I think I will be sad when I give up my office, but at the same time I don’t want to become the stereotypical old person yelling at the teenagers to get off my lawn. Overall I think I will prefer a more mobile way of working, not so tied to my particular current office.

 

Written by smist08

February 1, 2014 at 5:50 pm

Branching by Feature

with 7 comments

Introduction

Now that we’ve released Sage 300 Online as well as the on-premise Sage 300 ERP 2014, we need to change the way we develop features going forwards. We would like to develop features and frequently deploy these to Sage 300 Online so that customers can take advantage of these as soon as possible. Plus we would like to start including more features in our Product Updates.

This means that we have two separate products that would like to release features out of the same source code on their own timelines.

This article describes some of the issues with doing this and describes some aspects of the procedures we are following. In some cases there are additional benefits and in some cases there is additional work.

Operating SaaSy

Generally we are switching from releasing big bang product versions every year or so, to releasing updates on a much more frequent basis. I’ve blogged about this before in these articles: SaaSifying Sage and SaaSifying Accpac. This article is really just covering one aspect of this process, namely how we now use our source control system. This is one aspect of a continuous deployment pipeline. It also involves the DevOps team.

Doneness Criteria

A key ingredient in making this all work is having good “Doneness Criteria”, so that when agile stories are completed they are fully done, including all testing, automation, documentation, etc. Then when a “done” story is included in the release, it really works. The closer you can come to this the better. If you aren’t very close, then you are going to need a lot of integration/regression testing steps between these small feature releases to ensure the quality of the product. An important aspect of this is to have good automated tests to find integration problems and reduce manual regression testing.

Source Code Branching

Source code control systems have the concept of branching, where you create a branch for a feature, go through the agile process developing the feature, and when it is all done merge the branch back into the product. This is a very simple branching strategy which works great if you are developing one feature at a time. However, in reality there is a lot of work going on simultaneously, including multiple features being developed at once, sustainment work (bug fixing) and infrastructure work (like the 64 Bit version).

Modern source control systems like Subversion and Git provide very powerful features to easily create branches and to incorporate changes from other branches and finally merge branches back together. We’ve used Subversion for some time now and have a lot of source code and history stored here. But for new projects we’ve been creating them in Git, partly due to the more powerful branching features.

To work effectively there are usually quite a few branches, but hopefully not as many as on the fractal tree below.

[Image: fractal tree]

So what sort of branches do we need? Potentially we could create a lot of branches in the source code. The main root of all the branches is called the trunk, and we always want the trunk to be in a releasable state. Perhaps we can have a development branch and a regression branch. Then from the development branch we fork off the individual features as separate branches. When we are happy with them, we merge them into the regression branch for more testing, and when this is completed they are merged into the trunk.

Generally this works well for one product, say a web product where the DevOps team controls what gets merged from the regression branch into the trunk. However when you have multiple products derived from the same source tree this can be quite complicated.

Another danger of too much branching is that the longer a branch lives, the further it can get from the trunk due to work done on other branches, and then merging back into a main branch or the trunk gets more difficult. This can be alleviated by merging changes done elsewhere into your branch as you go, so you have smaller merges along the way rather than one big, difficult merge at the end. Another advantage we have in Sage 300 is that it’s a very large product consisting of hundreds of DLLs, OCXs and EXEs. As a result, different teams are often operating in quite different areas of the source code. However, there will still be points of contention; one that is happening right now is multiple features being added to the reporting engine, creating contention on that source code module.

So for a branching strategy we need to strike a compromise between keeping everything perfectly isolated and having a lot of gates for features to pass through versus keeping the management of the system down and reducing the difficulty in merging branches.

So our approach is to limit the number of branches. Trunk is the main branch that is released to production (i.e. customers). Nothing is developed directly on the trunk; everything needs to be developed on a feature branch. Before merging back into trunk, a feature team must first merge the trunk into the feature branch and do some testing (including running our full automated testing suite), as sketched below. Then the feature can be merged back into the trunk, and the feature is available for either the Online or on-premise products to consume.
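
In Git terms, the cycle looks roughly like this (the branch names are examples, and I’m assuming the main branch is named trunk):

    git checkout -b my-feature trunk   # start the feature from the trunk
    # ... develop and commit on the feature branch ...
    git merge trunk                    # first merge trunk changes into the feature
    # ... run the full automated test suite on the feature branch ...
    git checkout trunk
    git merge my-feature               # then land the feature on the releasable trunk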

Consequences

Of course this has consequences for other components in the build pipeline. Now, rather than just building the trunk with our set of build servers, we have to build all these branches. As part of the build we run a number of unit tests, but we also have a large set of automated tests that run on a separate set of servers, and now we want to run these automated tests on the branches.

Generally this increases the logistics of maintaining all these sets of servers. The build servers need to be configured on what branch to run and then the output of that build needs to be fed into the automated test servers to run all the tests.

Generally this all improves the quality and keeps the trunk in a releasable state since a lot of testing has been done before the feature is merged into the trunk.

For newer components, like the Sage 300 Online home page and the Sage 300 ERP provisioning engine, which are built entirely in ASP.Net, we build and deploy using TeamCity. This simplifies the whole process, and we can use a more sophisticated branching strategy in conjunction with DevOps, where they have a release branch that only they control, giving them complete control over what they merge, build and deploy.

Summary

We’ve been putting this infrastructure in place for some time now and have it operating fairly smoothly. Now we will really start putting it into practice by releasing features frequently on two product lines. The main test that we’ve done it right is that it’s seamless to customers because we’ve been able to maintain high quality with all these processes and technologies in place.

Moving on to Unicode

with 7 comments

Introduction

My first computer was an Apple II Plus, which didn’t even support lower case characters. Everything was upper case. To do word processing you used special characters to change case. Now we expect our computers to handle not just upper and lower case characters, but accented characters, special symbols, all the Asian language characters, all the Arabic characters and everything else.

In the beginning there was ASCII, which allowed computers to encode the alphabet, numbers and the common typewriter characters, 128 values in all. Then we added another 128 characters for accented characters. But there were quite a few different accented characters, so we had a standard first 128 characters and then various options for the upper 128. This allowed us to handle most European languages on computers. Then there was the desire to support Chinese characters, which number in the tens of thousands. So the idea came along to represent these as two bytes, or 16 bits. This worked well, but it still only supported one language at a time and often ran out of characters. In developing this there were quite a few standards and quite a bit of incompatibility in moving files containing these between computer systems. But generally the first 128 characters were the original ASCII characters and the rest depended on the code page you chose.

To try to bring some order to this mess and make the whole problem easier, Unicode was invented. The idea was to have one character set that contained all the characters from all the languages in the world. Sounds like a good idea, but of course computer scientists underestimated the problem. They assumed there would be at most 64K characters and that they could use 2 bytes to represent each character. Like the 640K memory barrier, this turned out to be quite a bad assumption. In fact there are now about 110,000 Unicode characters and the number is growing.

Unicode specifies all the characters, but it allows for different encodings. These days the two most common are UTF8 and UTF16. Both have pros and cons. Microsoft chose UTF16 for all their systems, and since I work on Sage 300 and we are trying to solve this on Windows, that is what we will discuss in this article. Converting Sage 300 to Unicode using UTF8 would probably have been easier, since UTF8 was designed for better compatibility with ASCII, but we live in a Windows UTF16 world where we want to interact well with SQL Server and the Windows API.

Microsoft adopted UTF16 because they felt it would be easier: each string simply became twice as long, since every character was represented by 2 bytes, so memory doubled everywhere and the conversion was straightforward. This was fine except that 2 bytes no longer holds every Unicode character, so some characters actually take two 16-bit slots (a surrogate pair). But generally you can still mostly predict the number of characters in a given amount of memory. UTF16 also lends itself better to array operations, rather than having to walk through strings with next/previous operations.
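
Since .Net strings are UTF16 as well, the surrogate pair behaviour is easy to see from C#. A minimal sketch (the character is just an arbitrary example from outside the 64K range):

   using System;

   class SurrogateDemo
   {
       static void Main()
       {
           // U+20000 is a CJK ideograph outside the 64K Basic Multilingual Plane,
           // so UTF16 stores it as two 16-bit code units (a surrogate pair).
           string s = "\U00020000";
           Console.WriteLine(s.Length);                   // 2 -- Length counts UTF16 code units
           Console.WriteLine(char.IsSurrogatePair(s, 0)); // True
       }
   }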

Compatibility

Windows took the approach that to maintain compatibility they would offer two APIs, one for ANSI and one for Unicode. So any Windows API call that takes a string as a parameter has two versions, one ending in A (for ANSI) and one ending in W (for Wide). Then in Windows.h, if you compile with UNICODE defined it uses the W version, otherwise the A version. This certainly adds a lot of pollution to the Windows API, but it maintained compatibility with all pre-existing programs. This was all put in place as part of Win32 (since recompiling was necessary anyway).
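
The same A/W pairing even shows up from .Net: the P/Invoke marshaler appends an A or a W to the entry point name depending on the declared CharSet. A small sketch using MessageBox as a convenient example:

   using System;
   using System.Runtime.InteropServices;

   class AWDemo
   {
       // With CharSet.Unicode the marshaler binds to MessageBoxW in user32.dll;
       // CharSet.Ansi would bind to MessageBoxA instead.
       [DllImport("user32.dll", CharSet = CharSet.Unicode)]
       static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

       static void Main()
       {
           MessageBox(IntPtr.Zero, "Hello from the W API", "A/W demo", 0);
       }
   }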

For Sage 300 we’ve resisted going all in on Unicode, because we don’t want to double the size of our API and maintain that for all time, and a Unicode version would break every third party add-in and customization out there. We have the additional challenge that Unicode doesn’t work very well in VB6.

But with our 64 Bit version, we are not supporting VB6 (which will never be 64 Bit) and all third parties have to make changes for 64 Bit anyway, so why not take advantage of this and introduce Unicode at the same time? This makes the move to 64 Bit more work, but hopefully it will be worth it.

Why Switch to Unicode

Converting a large C/C++ application to Unicode is a lot of work, so why go to the effort? Sage 300 has had traditional and simplified Chinese versions for a long time. What benefits does Unicode give us over the current double byte system we support?

One is that with double byte, only one character set can be installed on Windows at a time. This means that for our online version we need separate servers to host the Chinese version. With Unicode we can support all languages from one set of servers; we don’t need a separate set of servers for each language group. This makes managing the online server farm much easier and much more uniform for upgrading and such. Beyond our online offerings, we have had customers complain that when running Terminal Server they need separate servers for branch offices in different parts of the world using different languages.

Another advantage is that we can now support mixtures of scripts, so users can enter Thai in one field, Arabic in another and Chinese in another. Perhaps a bit esoteric, but it could have uses for optional fields where there are different ones for different locales.

Another problem we tend to have is with sort orders in all these different incompatible multi-byte character systems. With Unicode this becomes much more uniform (although there are still several sort orders) and much easier to deal with. Right now we avoid the problem by limiting key fields to upper case alphanumeric characters, but perhaps down the road with Unicode we can relax this.

A big advantage is ease of setup. Getting the current multi-byte systems working requires some care in setting up the Windows server, which often challenges people and causes problems. With Unicode, things are already set up correctly, so this is much less of a problem.

Converting Sage 300

SQL Server already supports Unicode. Any UI technology newer than VB6 will also support Unicode. So that leaves our Business Logic layer, database driver and supporting DLLs. These are all written in C/C++ and so have to be converted to Unicode.

We still need to maintain our 32-Bit non-Unicode version and we don’t want two sets of source code, so we want to do this in such a way that we can compile the code either way and it will work correctly.

At the lower levels we have to use Microsoft’s tchar.h file which provides defines that will compile one way when _UNICODE is defined and another when it isn’t. This is similar to how Windows.h works for the Windows runtime, only it does it for the C runtime. For C++ you need a little extra for the string class, but we can handle that in plustype.h.

One annoying thing is that to specify a Unicode string constant in C you write L"abc", and with the macro in tchar.h you change it to _T("abc"). Changing all the strings in the system this way is certainly a real pain, especially since 99.99% of these will never contain a non-ASCII character because they are for debugging or logging. If Microsoft had adopted UTF8 this wouldn’t have been necessary, since the ASCII characters are the same; with UTF16 this, to me, is the big downside. But then it’s pretty mechanical work and a lot of it can be automated.

At higher levels of Sage 300, we rely more on the types defined in plustype.h and tend to use routines from a4wapi.dll rather than using the C runtime directly. This is good, since we can change these places to compile either way and hide a lot of the details from the application programmer. The other benefit is that we only need to convert the parts of the system that deal with the database and the parts that deal with string handling (like error messages).

One question that comes up is what the length of fields in the database will be. Right now if a field is 60 characters then it’s 60 bytes. Under this method of converting the application, the field will be 60 UTF16 characters, or 120 bytes. (This holds as long as you don’t use the special characters that require 4 bytes, but most characters are in the standard 64K block.)
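
You can sanity check that arithmetic from .Net, where Encoding.Unicode is UTF16:

   using System;
   using System.Text;

   class FieldSizeDemo
   {
       static void Main()
       {
           string field = new string('A', 60);                      // a 60 character field value
           Console.WriteLine(Encoding.ASCII.GetByteCount(field));   // 60  -- the old single byte size
           Console.WriteLine(Encoding.Unicode.GetByteCount(field)); // 120 -- UTF16 uses 2 bytes per BMP character
       }
   }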

Summary

Moving to both 64 Bits and Unicode is quite an exciting prospect. It will open up the doors to all sorts of advanced features, and really move our application ahead in a major way. It will revitalize the C/C++ code base and allow some quite powerful capabilities.

As a usual disclaimer, this article is about some research and proof of concept work we are doing and doesn’t represent a commitment as to which future version or edition this will surface in.

Sage 300 ERP Optional Fields

with 4 comments

Introduction

Optional Fields were added to Sage 300 ERP as the major feature for version 5.3A. They are a great way to add custom data fields to many master and transaction screens in Sage 300. They also have the benefit that we can flow them with the transactions through the system, for instance from an O/E Order to the corresponding A/R Invoice to the G/L Batch. This opens up a lot of power for tracking extra data in the system and to do sophisticated reporting based on it.

In this article we will look at some of the programming considerations when dealing with Optional Fields at the API level. We’ll use the Sage 300 .Net API to explore how to deal with a few things that come up with Optional Fields.

Many programmers feel they can ignore Optional Fields and their program works perfectly for sample data, but as soon as they install it at a customer site, it fails. A common cause of this is that the customer has required optional fields that cannot be ignored and the API program needs to be updated. (Another common failure at customer sites is caused by not dealing with locked fiscal periods).

Ordered Header/Detail

From my previous article on the View Protocols, any optional field view is an ordered detail whose header is whatever it is an optional field for. An ordered header/detail means that you set the value of the key. But you do have to be careful to set up your optional fields correctly in the application’s setup UI, since there is validation on these.

In the sample program I’ll give a complete set of steps. In some situations some of the steps can be skipped; for instance you don’t really need to call RecordClear and RecordGenerate for the header optional fields (see the header sketch after the sample code below), but I’ll leave them in since sometimes you do, and it’s easier to use a formula that always works rather than re-thinking everything each time.

Revision Lists

Since Optional Fields are often sub-details of details that are stored in revision lists, you sometimes need to be careful that things exist before using them. Revision Lists are our mechanism for storing things in memory before they are written to the database in a single database transaction. So until you save the header, nothing is in the database and everything is in memory. Revision Lists store the list of details that have been manipulated so far, and the Optional Fields are themselves stored in Revision Lists until the big database transaction happens.

In the sample program we insert the detail before adding its optional fields. Until the detail is inserted there is nothing to attach the optional fields to, so we need to do that first. The Insert operation only adds the detail record to a revision list in memory, but once that is done we can add sub-details (i.e. Optional Fields) to it (which are also stored in memory).

Sample Program

For a sample program, I modified the ASP.Net MVC sample ARInvEntry to save a couple of optional fields for the header and a couple of optional fields for the detail. Below is a subset of the code to insert the detail optional fields:

   // 9. Insert detail.
   arInvoiceDetail.Insert();   // Insert the detail line (only in memory at this point).

   // 10. Add a couple of detail optional fields.
   arInvoiceDetailOptFields.RecordClear();
   arInvoiceDetailOptFields.RecordGenerate(false);
   arInvoiceDetailOptFields.Fields.FieldByName("OPTFIELD").SetValue("EXTWARRANTY", false);
   arInvoiceDetailOptFields.Fields.FieldByName("VALIFBOOL").SetValue(true, false);
   arInvoiceDetailOptFields.Insert();
   arInvoiceDetailOptFields.RecordClear();
   arInvoiceDetailOptFields.RecordGenerate(false);
   arInvoiceDetailOptFields.Fields.FieldByName("OPTFIELD").SetValue("WARRANTYPRD", false);
   arInvoiceDetailOptFields.Fields.FieldByName("VALIFTEXT").SetValue("180 Days", false);
   arInvoiceDetailOptFields.Insert();

   // 10.5 Register the changes for the detail.
   arInvoiceDetail.Update();

   // 11. Insert header. (This will do a Post of the details.)             
   arInvoiceHeader.Insert();
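
For completeness, here is a hedged sketch of the matching header optional field steps; these calls would go before the header Insert above. The arInvoiceHeaderOptFields view is assumed to be composed like the detail one in the full sample, and the PRIORITY optional field is purely illustrative (it must exist in your Optional Fields setup):

   // Add a header optional field (a sketch; PRIORITY is illustrative).
   arInvoiceHeaderOptFields.RecordClear();         // not strictly required for header
   arInvoiceHeaderOptFields.RecordGenerate(false); // optional fields, but harmless
   arInvoiceHeaderOptFields.Fields.FieldByName("OPTFIELD").SetValue("PRIORITY", false);
   arInvoiceHeaderOptFields.Fields.FieldByName("VALIFTEXT").SetValue("HIGH", false);
   arInvoiceHeaderOptFields.Insert();   // in memory until the header Insert posts everything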

Different Field Types

If you look in the database you will see that the Optional Field tables don’t hold that many fields, yet any Optional Field View has quite a few fields, many of them calculated. Optional Fields can be all sorts of different types, but in the database all the values are stored in a single VALUE field regardless. So there has to be a conversion to/from this text field and the real type. This is the job of all the VALIFtype fields. Basically you use the VALIF field that matches the type of the Optional Field, and the View handles the conversion to and from the type as stored in the VALUE database field. That is why we used the fields VALIFTEXT and VALIFBOOL above.

Auto Insert

In the sample program I just inserted the optional fields myself. However there is another way. Views that have optional fields usually have a field called PROCESSCMD (or something similar) where you can set a value with a title like “Insert Optional Fields” which will insert the optional fields that are needed. You can set this field and call Process on the View, and it will insert these optional fields for you. Then you can read the optional field, set its value and update it. Some people find this an easier way to do things, and you get all the optional fields with their default values as a bonus. (Note that this only applies to Optional Fields that are set to auto insert in the application’s Optional Fields setup screen.)
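
As a hedged sketch (the PROCESSCMD values are view-specific, so the command number here is illustrative and needs to be checked against the view you are using):

   // Ask the View to auto insert its optional fields (a sketch; the actual
   // command value for "Insert Optional Fields" varies by view).
   arInvoiceHeader.Fields.FieldByName("PROCESSCMD").SetValue(1, false);
   arInvoiceHeader.Process();

   // The auto inserted optional fields now exist with their default values,
   // so read one back, set its value and update it.
   arInvoiceHeaderOptFields.Fields.FieldByName("OPTFIELD").SetValue("PRIORITY", false);
   if (arInvoiceHeaderOptFields.Read(false))
   {
       arInvoiceHeaderOptFields.Fields.FieldByName("VALIFTEXT").SetValue("HIGH", false);
       arInvoiceHeaderOptFields.Update();
   }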

Similarly, for Views that transfer in Optional Fields (like from an Order to a Shipment), you will see PROCESSCMD values like “Default and Transfer Optional Fields”. If you are doing API programming and want to preserve the flow of Optional Fields, you will occasionally have to set one of these and call Process.

When dealing with Optional Fields, it’s worth checking out the PROCESSCMD functions of the main View to see if they will help you do your job and save you a fair bit of coding.

Lookup Values

Many optional fields have a list of valid values. If you don’t set one of these values, you will get an error message to that effect. When dealing with optional fields make sure you have an error handler to show the errors after an exception, as explained here. If you want to get at these values, they are in CSOPTFD CS0012 (a detail of CSOPTFH CS0011). You can read through these views like any other, as explained here.
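
For example, here is a hedged sketch of browsing the allowed values for one optional field via CS0012; the WARRANTYPRD field and the OPTFIELD/VALUE field names follow the usual conventions but should be verified against the view’s metadata:

   // List the valid values defined for the WARRANTYPRD optional field
   // (a sketch; WARRANTYPRD is illustrative).
   View optFieldValues = dbLink.OpenView("CS0012");
   optFieldValues.Browse("OPTFIELD = \"WARRANTYPRD\"", true);
   while (optFieldValues.Fetch(false))
   {
       Console.WriteLine(optFieldValues.Fields.FieldByName("VALUE").Value);
   }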

Summary

Optional Fields are a powerful feature in Sage 300 ERP. Many customers use these in a fundamental way to support their businesses. This means that developers working in the Sage 300 world have to be cognizant of these and make sure their programs are compatible with their use.

Written by smist08

January 4, 2014 at 3:54 am
