Synchronizing Data with Sage 300
Introduction
Often there is a need to synchronize data from an external source with Sage 300 data. For instance, with Sage CRM we want to keep the list of customers synchronized between the two systems. This way if you enter a new customer in one, it gets automatically added to the other. Similarly, if you update, say, an address in one, then it is updated in the other. Along the way there have been various techniques to accomplish this. In this blog post I'll cover how this has been done in the past and present a new idea on how to do this now.
In the past there were a number of constraints, such as supporting multiple databases like Pervasive.SQL, SQL Server, IBM DB2 and Oracle, but today Sage 300 only supports SQL Server. Similarly, some suggested approaches would be quite expensive to implement inside Sage 300. Closely tied to synchronization is the desire by some customers for database auditing, which we will also touch upon.
Sage CRM Synchronization
The first two-way data synchronization we did was the original Sage 300 to Sage CRM integration. It used sub-classed Views to capture changes to the data and then used the Sage CRM API to make the matching change in Sage CRM. Sage CRM did something similar and would write changes back to Sage 300 via one of its APIs.
The main problem with this integration technique is that it's fairly brittle. You can configure the integration to either fail, warn or ignore when an error occurs in the other system. If you select fail, then both systems need to be running in order for anyone to use either; so if Sage CRM is offline, then so is Sage 300. If you select warn or ignore, then the record will be updated in one system and not the other. This puts the databases out of sync, and a manual full re-sync will need to be performed.
For the most part this system works pretty well, but it isn't ideal due to the trade-off of either requiring both systems to always be up, or having to run manual re-syncs every now and then. The integration is now built into the Sage 300 business logic, so sub-classed Views are no longer used.
The Sage Data Cloud
The intent of the Sage Data Cloud was to synchronize data with the cloud without requiring the on-premises accounting system to always be online. As a consequence, it couldn't use the same approach as the original Sage CRM integration. In the meantime, Sage CRM added support for vector clock synchronization via SData. The problem with SData synchronization was that it was too expensive to retrofit into all the accounting packages that needed to work with the Sage Data Cloud.
The approach the Sage Data Cloud connector took was to keep a table matching the accounting data in a separate database. This table held just the key and a checksum for each record, so the connector could tell what had changed by scanning the database and re-computing the checksums; if a checksum didn't match, the record had been modified and needed syncing.
This approach didn't require manual re-syncs or require both systems to be online. However, it was expensive to keep scanning the database looking for changes, so either changes weren't reflected terribly quickly, or the scanning added unnecessary load to the database server.
What Does Synchronization Need?
The question is then what does a modern synchronization algorithm like vector clock sync require to operate efficiently? It requires the ability to ask the system what has changed since it last ran. This query has to be efficient and reliable.
You could do a CSQRY call and select records whose audit stamps are newer than our last clock tick (sync time). However, the audit stamp isn't an index, so this query will be slow on larger tables. Further, it doesn't easily give you inserted or deleted records.
Another suggested approach would be to implement database auditing on the Sage 300 application’s tables. Then you get a database audit feature and if done right, you can use this to query for changed records and then base a synchronization algorithm on it. However, this has never been done since it’s a fairly large job and the ROI was never deemed worthwhile.
Another approach, specific to SQL Server, would be to query the database transaction logs. These will tell you what happened in the database. This has a couple of problems: queries on the transaction logs aren't oriented around what has changed since the last sync, and so are either slow or return too much information. Further, SQL Server manages these logs fairly aggressively, so if your synchronization app was offline for too long, SQL Server would recycle the logs and the data wouldn't be available anymore. Plus, this would force everyone to manage logs, rather than just have them truncated on checkpoint.
SQL Server 2008 to the Rescue
Fortunately, SQL Server 2008 added some nice change tracking/auditing functionality that does what we need. And since Sage 300 only supports SQL Server, we can use this functionality. There is a very good article on MSDN about this and how it applies to synchronization here. Basically the SQL Server team recognized that both data synchronization and auditing are important and quite time consuming to add at the application level.
Using this functionality is quite simple: first turn on change tracking for the database, then turn on change tracking for each table you want to track changes for.
Then there is a SQL function that you can select from to get the changes. For instance, I updated a couple of records in ARCUS, then inserted a new one, and the result is shown.
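For reference, the T-SQL for enabling and querying change tracking can be sketched as a couple of small helpers that build the statements. The database and table names are examples, the retention settings are just one reasonable choice, and I'm assuming ARCUS's key column IDCUST here; CHANGETABLE(CHANGES …) is the standard SQL Server 2008+ syntax:

```javascript
// Build the T-SQL statements to enable change tracking for a database
// and for one of its tables (run once during setup).
function enableTrackingSql(database, table) {
  return [
    `ALTER DATABASE [${database}] SET CHANGE_TRACKING = ON ` +
      `(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)`,
    `ALTER TABLE [${table}] ENABLE CHANGE_TRACKING`
  ];
}

// Build the query for all keys changed since the version we recorded at
// the last sync (SYS_CHANGE_OPERATION tells insert vs update vs delete).
function changesSinceSql(table, lastSyncVersion) {
  return `SELECT CT.IDCUST, CT.SYS_CHANGE_OPERATION ` +
         `FROM CHANGETABLE(CHANGES [${table}], ${lastSyncVersion}) AS CT`;
}
```

After applying the changes, the synchronizer would call SELECT CHANGE_TRACKING_CURRENT_VERSION() and store that as the baseline for its next run.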
This is giving me the minimal information, which is all I require for synchronization since I really only need to know which records to synchronize and can then get the full information from the main database table.
If you want to use this to audit all the changes in your database, then there are more options you can set to give you more complete information on what happened.
Summary
If you are writing an application that needs to synchronize data with Sage 300 (either one way or two way), consider using these features of SQL Server since you can add them externally to Sage 300 without affecting the application.
Similarly, if you are writing a database logging/auditing application you might want to look at what Microsoft has been adding to SQL Server starting with version 2008.
Passing the Torch
Introduction
As many people already know, I’ve now retired after 23+ years with Computer Associates/Sage working on the Accpac/Sage 300 product line. I’m now happily living in Gibsons, BC which is a short 40 minute ferry ride from Vancouver.
John Thomas (aka JT)
John Thomas will be taking over for me as the Chief Architect for the Sage 300 product line. He will also be taking over my role as Sage 300’s main blog writer. His blog is on WordPress here: https://jthomas903.wordpress.com/. Follow this blog to keep up to date on Sage 300.
It will also be posted on Sage City here: http://sagecity.na.sage.com/support_communities/sage300_erp/b/sage_300_erp_r_and_d/archive/2016/03/21/sage-300c-transition. He will continue to post articles to the Sage 300 In Development blog area on Sage City as well as on WordPress.
Check out his first blog posting, where he introduces himself so you can learn a little about him. Introduce yourself via the comments section. All bloggers appreciate suggestions on topics for future articles.
My Blog
I’ll continue writing my blog, but it probably won’t be on Sage 300 anymore. A lot of my blogs relied on having the support and expertise of the team around me to help out. I really appreciate all the help and support I’ve received over the years from everyone at Sage as well as all the people in the wider Sage 300 community.
Of course I won’t delete my blog, or delete any articles. They will remain on WordPress as long as WordPress keeps hosting them. They will also remain on Sage City where I have always mirrored them. I’ll probably continue to fix typos or make corrections to any errors that I notice or are pointed out to me.
Also feel free to keep asking questions, but beware that I can’t go down the hall to ask an expert on a topic and I can’t consult the source code anymore. But I’ll still do my best to answer, though the answer may be to go ask tech support.
I am very tempted to finish a few projects I never had time for at Sage, like investigating tying Azure Machine Learning (or perhaps Google's or Amazon's machine learning) to Sage 300. Or perhaps using Azure Logic Apps (or Azure App Service) to create workflows around Sage 300. Or perhaps do a POC with Sage 300 and Microsoft's Power BI. But I think I'll leave those projects to others for now. I do have a few opinions and biases on Accounting Software, Software Development, the Cloud and future trends that I may still blog on in the future.
Chances are what I will be blogging on will be more around my other interests that I’ve been getting back to now that I’m retired. These would include:
- Photography – including via DSLR and via my Drone.
- Guitar – now that I have time to practice again.
- Triathlon – running, swimming and biking.
- Hiking – there’s great hiking here on BC’s Sunshine Coast.
- Video Games – I’ve been playing with the Amazon Lumberyard game engine.
- Artificial Intelligence – I’ve been reading quite a bit about this lately and there are some amazing advances currently in the works. For instance, Google’s AI just beat the world Go champion.
- Travel – I do plan to do a lot more travelling and suspect I’ll be blogging about it.
The nice thing about being retired is that I can pursue a lot of diverse interests. So who knows where these will lead over the coming months.
Summary
As I move on to the next phase of my life, so will my blog. But I have full confidence in JT and the Sage 300 team to carry the torch forward. I’m eagerly waiting to see JT’s future blog postings and see the various press releases as Sage 300 continues to evolve.
Sage Connect 2016
Introduction
The Sage Connect 2016 conference has just wrapped up in Sydney, Australia. I was very happy to be able to head over there and give a one-day training class on our new Web UIs SDK, and then give a few sessions in the main conferences. This year the conference combined all the Sage Australia/New Zealand/Pacific Islands products into one show. So there were customers and partners from Sage HandiSoft, Sage MicrOpay, Sage One as well as the usual people from Sage CRM, Sage 300, Sage CRE and Sage X3.
The show was on for two days where the first day was for customers and partners and then the second day was for partners only. As a result, the first day had around 600 people in attendance. There was a networking event for everyone at the end of the first day and then a gala awards dinner for the partners after the second day.
A notable part of the keynote was the kick-off of the Sage Foundation in Australia with a sponsorship of Orange Sky Laundry. Certainly a worthwhile cause that is doing a lot of good work helping Australia’s homeless population.
There was a leadership forum featuring three prominent Australian entrepreneurs discussing their careers and providing advice based on their experience. These were Naomi Simpson of Red Balloon, Brad Smith of Braaap Motorcycles and Steve Vamos of Telstra. I found Brad Smith especially interesting as he created a motorcycle manufacturer from scratch.
The event was held at the conference center at the Australian Technology Park. This was very interesting since it was converted from the Eveleigh Railway Workshops and still contains many exhibits and equipment from that era. It created an interesting contrast of 2016 era high tech to the heavy industry that was high tech around 1900.
Sage 300
The big news for Sage 300 was the continued roll out of our Web UIs. The Sage 300 2016.1 release, just being rolled out, adds the I/C, O/E and P/O screens along with quite a few other screens and enhancements. Jaqueline Li, the Product Manager for Sage 300, was also at the show and presented the roadmap for what customers and partners can expect in the next release as well.
Sage is big on promoting the golden triangle of Accounting, Payments and Payroll. In Australia this is represented by Sage 300, Sage Payment Solutions and Sage MicrOpay, which all integrate to complete the triangle for the customers. Sage Payment Solutions (SPS) is the same one as in North America and now operates in the USA, Canada and Australia.
Don Thomson, one of the original founders of Accpac and the developer of the Access-C compiler, was present representing his current venture TaiRox. Here he is being interviewed by Mike Lorge, the Managing Director of Sage Business Solutions, on the direction of Sage 300 during one of the keynote sessions.
Development Partners
Sage 300 has a large community of ISVs that provide specialized vertical Accounting modules, reporting tools, utilities and customized solutions. These solutions have been instrumental in making Sage 300 a successful product and a successful platform for business applications. Without these companies' relentless, passionate support, Sage 300 wouldn't have anywhere near the market share it has today.
There were quite a few exhibiting at the Connect conference as well as providing pre-conference training and conference sessions. Some of the participants were: Altec, Accu-Dart, AutoSimply, BSP Software, Dingosoft, Enabling, Greytrix, HighJump, InfoCentral, Orchid, Pacific Technologies, Symphony, TaiRox and Technisoft.
I gave a pre-conference SDK training class on our new Web UIs, so hopefully we will be seeing some Web versions of these products shortly.
Summary
It's a long flight from Vancouver to Sydney, but at least it's a direct flight. The time zone difference is 19 hours ahead, so you feel it as 5 hours back, which isn't too bad. Going from Canadian winter to Australian summer is always enjoyable to get some sunshine and feel the warmth. Sydney was hopping: tourist season was in full swing, multiple cruise ships were docked in the harbor, Chinese New Year celebrations were underway, and all sorts of other events were going on.
The conference went really well, and was exciting and energizing. Hopefully everyone learned something and became more excited about what we have today and what is coming down the road.
Of course you can’t visit Australia without going to the beach, so here is one last photo, in this case of Bondi Beach. Surf’s up!
Sage 300c Web Services
Introduction
Hand in hand with true HTML/JavaScript/CSS Web UIs, you also want to access the same logic from other general programs using RESTful Web Services. This gives a general API to access the application which doesn't require any Sage 300 components be installed on the client computer and doesn't require the calling application be on the same computer or even at the same location.
ASP.Net MVC Web screens tend to have quickly changing interfaces between the Views, Controllers and Models, which makes using the same Web Services as the UI a bit problematic, especially as the screens evolve. You want a stable Web Services interface that preserves compatibility from version to version and provides a wider, more general interface. At the same time, the developer of an ASP.Net MVC program doesn't want to do a completely different implementation just to expose Web Services.
The way ASP.Net MVC solves this dilemma is by allowing you to add a Web Services stack on top of your existing models (which in our case means fully leveraging the business repositories and Sage 300 Business Logic as well). But it uses a custom controller to handle the Web Services requests. In the Microsoft stack there are several supported standards for Web Services, but the one we used is OData. This means that with our Web Services you can perform all the standard OData queries, and the services support the standard OData metadata.
With our Sage 300 2016 Product Update 1 we have included a number of Web Services in the product. These are automatically installed if you select the Web UIs option from the main installation. So if the Web UIs are up and running then you can try playing with the Web Services. In this article we'll show how to get started using these. Over the next couple of releases, we'll be fleshing these out to support all the business logic as well as services beyond the basic CRUD operations.
Some Examples
If you type:
https://yourservername/Sage300webapi/sdata/-/SAMLTD/GL/Accounts
into the Chrome browser you will be prompted for your Sage 300 login credentials, which you can enter. Note that from this browser prompt the password is case sensitive, so you need to uppercase your normal Sage 300 password (since our regular login screen normally does this).
Then after entering the correct data you will get back a JSON object with all the information in your chart of Accounts (including details like optional fields):
Working with the Browser directly, although fun, will soon become tedious. An easier approach is to install the Chrome add-in PostMan, which will remember your Web Services calls so you can adjust and repeat them. You need to set the Basic Authorization header with your Sage 300 login and password. Below we use the shortened URL to get the list of all the available feeds for SAMLTD with the URL:
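For instance, the value of that Basic Authorization header is just the base64-encoded "user:password" pair. A Node.js sketch (the 'ADMIN' credentials here are only an example):

```javascript
// Build the HTTP Basic Authorization header value for a Sage 300 login.
// Remember the password must be upper-cased, as noted above for the
// browser prompt.
function basicAuthHeader(user, password) {
  const credentials = `${user}:${password.toUpperCase()}`;
  return 'Basic ' + Buffer.from(credentials, 'utf8').toString('base64');
}
```

So basicAuthHeader('ADMIN', 'admin') produces the same header value PostMan generates when you fill in its Basic Auth fields.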
https://yourservername/Sage300webapi/sdata/-/SAMLTD
using PostMan:
And we get the returned JSON object containing the list of Web Services we support. The list is per company, since not all accounting applications may be activated in the database.
Queries
You can do standard OData queries to filter the returned data. For instance:
https://yourservername/Sage300webapi/sdata/-/SAMLTD/GL/Accounts?$filter=UnformattedAccount eq '1020'
will result in just this one account being returned:
The way we implement queries is by adding LINQ support that converts the LINQ query to a Browse filter for our Sage 300 View. This means we will support any query as long as we can translate it into a Browse filter. If the filter contains a SQL function we don't support, then you will get back a not supported error for your query. Note that often people writing code for the regular Web UIs just use our C# LINQ support to browse/fetch rather than calling browse/fetch directly, since this lets you leverage other advanced features in C# and .Net.
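A small client-side helper for assembling these query URLs might look like this (the server name is a placeholder; the URL scheme follows the examples above, and the query string must be URL-encoded):

```javascript
// Build an OData query URL against the Sage 300 Web API.
// options maps OData system query option names (without the $) to values,
// e.g. { filter: "UnformattedAccount eq '1020'" }.
function odataUrl(server, company, feed, options = {}) {
  const base = `https://${server}/Sage300webapi/sdata/-/${company}/${feed}`;
  const query = Object.entries(options)
    .map(([name, value]) => `$${name}=${encodeURIComponent(value)}`)
    .join('&');
  return query ? `${base}?${query}` : base;
}
```

For example, odataUrl('myserver', 'SAMLTD', 'GL/Accounts', { filter: "UnformattedAccount eq '1020'" }) yields the encoded form of the query shown above.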
Other Clauses
You can specify a sort order as long as what you request matches an index in the Sage 300 database:
https://yourservername/Sage300webapi/sdata/-/SAMLTD/GL/Accounts?$orderby=UnformattedAccount desc
You can specify to get the top n records or to skip n records via:
https://yourservername/Sage300webapi/sdata/-/SAMLTD/GL/Accounts?$top=2
https://yourservername/Sage300webapi/sdata/-/SAMLTD/GL/Accounts?$skip=2
which is useful to page data.
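The $top/$skip pattern lends itself to a simple paging loop. Here is a sketch where fetchPage is a hypothetical function that performs the authenticated GET for one page; it's shown synchronously for simplicity, whereas a real client would make asynchronous requests:

```javascript
// Page through an entire feed using $top/$skip.
// fetchPage(top, skip) is assumed to return the parsed array of records
// for one page (i.e. the result of GET ...?$top=top&$skip=skip).
function fetchAll(fetchPage, pageSize) {
  const all = [];
  for (let skip = 0; ; skip += pageSize) {
    const page = fetchPage(pageSize, skip);
    all.push(...page);
    if (page.length < pageSize) break; // short page means we're done
  }
  return all;
}
```

A page shorter than pageSize signals the last page, so the loop stops without needing to know the total record count up front.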
Meta Data
You can get meta data for all the feeds using the $metadata tag. For instance:
https://yourservername/Sage300webapi/sdata/-/SAMLTD/$metadata
will return the meta data for all the feeds that are relevant for SAMLTD:
(Note that this is quite a large JSON object to process).
Updating/Inserting/Deleting
This initial implementation includes sufficient G/L feeds for supporting financial reporting; hence these G/L feeds are read only at this point. We do support inserting G/L Batches, O/E Orders and A/R Customers, and many of the non-G/L feeds support updating, inserting and deleting. If an entity supports these, then you can delete a record by specifying DELETE as the HTTP verb (which is easy in PostMan); similarly, insert is via POST and update is via PUT or PATCH.
Generally, the best way to figure out the format of the payload to include with these is to do a GET and then use that payload as a template to build the JSON object with the data you want to update or insert.
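As a sketch, assembling an update request from a previously fetched record might look like this (the URL, field names and auth header value are all hypothetical; only the request shape follows the verbs described above):

```javascript
// Turn a record fetched via GET into an update request, using the GET
// payload as a template and overlaying just the fields to change.
// Returns the URL plus the options you'd pass to an HTTP client.
function buildUpdateRequest(url, authHeader, record, changes) {
  const payload = { ...record, ...changes };
  return {
    url,
    options: {
      method: 'PATCH',
      headers: {
        'Authorization': authHeader,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    }
  };
}
```

Starting from the GET payload means every required field is already present and correctly named, so only the values you overlay actually change.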
Since these Web APIs are built on the Sage 300 Business Logic all the usual validation will take place and you will get back Sage 300 error messages in the response payload if the request fails.
Troubleshooting
Ideally the responses from the server will include error messages to tell you what went wrong, so always check these. If they aren't helpful, then on your web server check the Web API trace log, which is located at:
Sage300InstallDir/Online/WebApi/Logs/trace.log
This will usually have the raw error when something has gone wrong.
If you don’t see anything in either of these places, perhaps check your IIS log to make sure that the request didn’t get rejected for some other reason. Especially remember to include your basic authorization header.
Security
If you expose your Web Services to the general Internet, ensure that you follow all the security measures in this article. You will need to do this if you are integrating with an external cloud service or other client located outside your network. Generally, you want to keep your Web Service communications private, so they can't be accessed or spied on by hackers. Using good practices around enforcing HTTPS is crucial here.
Summary
The set of Web Services included in the Sage 300 2016 Product Update 1 are intended to support Financial Reporting on General Ledger as well as basic e-Commerce functionality like accessing Customers and entering Orders. Part of the intent of this release is to let people play with these and provide feedback as we move to complete the full set of Web Services for our next version.
Accessing Sage 300c’s Business Logic from the Web UIs
Introduction
In the Sage 300 VB UIs, a user would do something in the UI (press a button or tab out of a field) and then the VB UI would be notified of this and would possibly execute a number of Sage 300 Business Logic (View) calls and based on their results update various other fields and possibly provide user feedback via a message box.
In the Web UIs we want to do similar processing since we want to re-use the tried and true Sage 300 Business Logic, but we have to be careful now that the Web UI is half running as JavaScript in the Browser and half running as .Net assemblies on the server. We have to watch the communication between the Browser and the server, since there will be quite a bit of latency in each call over the Internet. Generally, we never want one user action to result in more than one call to the server (and ideally most user actions shouldn't result in any calls to the server).
This blog post talks about where you put your code to access the Sage 300 Business Logic and how a UI interaction in the Browser flows through the system to execute this business logic.
Architecture
In the new Web UI architecture, we access the Sage 300 Business logic from our Business Repository classes. The base classes for these provide a wrapper of the Sage 300 .Net API to actually access the Views, but hiding the details of things like session and database link management. Then above this layer are the usual ASP.Net MVC Models and Controllers.
Generally, we want to put all this logic in the Business Repository so it can be used by multiple higher level clients including the Web UIs, our new RESTful WebAPI and services which are available for other applications to utilize.
Some of the layering is in place, ready for additional functionality like customization. We need to provide the common interfaces that can act as the basis for programmatic customization by inserting custom modules into the processing flow via Unity Interception.
Moving VB Code
In VB we often make lots of Business Logic (View) calls interspersed with lots of interactions with various UI controls. This code has to be separated: the Business Logic (View) calls go in the Business Repository, which runs on the server, while the part that interacts with the controls moves to the JavaScript code running in the Browser. The Business Repository then has to provide the necessary data in a single payload which the model/controller will transport to the Browser for processing.
The easiest way for the repository to transfer data is to have the model provide extra fields for this communication. This way no extra layers need to be involved, the business repository just populates these fields and the JavaScript layers pull them out of the returned JSON object and use them.
But you only want to add so much to the model, since you don’t want it to be too cumbersome to move around and you might want more focused calls. For these we usually define special calls in the controller and these go through a services layer to execute the code in the repository. The service call only passes the exact data needed (like parameters to a function) and knows what data to expect back.
Example
Adding extra fields to the model is fairly straight forward, so let’s trace through the logic of making a services call. In this example we’ll look at the simple case of checking a customer’s credit limit in A/R Invoice Entry (which is using a stateful business repository). We’ll start up in the JavaScript code and work our way down through the layers to get an idea of who does what.
So let’s start near the top. In the A/R Invoice Entry UI there are various times when the credit limit needs to be looked up. So the JavaScript code in the InvoiceEntryBehaviour.js file has a routine to initiate this process. Note that server calls are asynchronous so the response is handled in a callback function.
showCreditLimit: function (result) {
    // Open Credit Check pop up window
    if (result) {
        var jsonResult = JSON.parse(result);
        if (jsonResult.ShowCreditCheck) {
            arInvoiceEntryRepository.getCreditCheck(jsonResult.id,
                sg.utls.kndoUI.getFormattedDate(jsonResult.docDate),
                sg.utls.kndoUI.getFormattedDate(jsonResult.dueDate),
                "n" + invoiceEntryUI.CurrencyDecimals,
                jsonResult.totalPaymentAmountScheduled,
                jsonResult.prepaymentAmount);
        } else {
            onSuccess.onCreditClose();
        }
    }
    invoiceEntryUI.ModelData.isModelDirty.reset();
},
This calls a function in the InvoiceEntryRepository.js file to actually make the call to the server:
getCreditCheck: function (customerNumber, documentDate, dueDate, decimals,
    invoiceAmount, prepaymentAmount) {
    var data = {
        id: customerNumber,
        docDate: documentDate,
        dueDate: dueDate,
        decimals: decimals,
        totalPaymentAmountScheduled: invoiceAmount,
        prepaymentAmount: prepaymentAmount
    };
    sg.utls.ajaxPostHtml(sg.utls.url.buildUrl("AR", "InvoiceEntry", "GetCreditLimit"),
        data, onSuccess.loadCreditLimit);
},
This will initiate the call to the server. The URL will be built something like servername/Sage300/AR/InvoiceEntry/GetCreditLimit. The ASP.Net MVC infrastructure will use configuration by convention to look for a matching entry point in a loaded controller and hence call the GetCreditLimit method in the InvoiceEntryController.cs file:
[HttpPost]
public virtual ActionResult GetCreditLimit(string id, string docDate, string dueDate,
    string decimals, decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
{
    try
    {
        return PartialView(AccountReceivable.ARInvoiceCreditCheck,
            ControllerInternal.GetCreditLimit(id, docDate, dueDate, decimals,
                totalPaymentAmountScheduled, prepaymentAmount));
    }
    catch (BusinessException businessException)
    {
        return JsonNet(BuildErrorModelBase(CommonResx.NotFoundMessage, businessException,
            InvoiceEntryResx.Entity));
    }
}
Which will call the InvoiceControllerInternal.cs GetCreditLimit method:
internal ViewModelBase<CustomerBalance> GetCreditLimit(string customerNumber,
    string documentDate, string dueDate, string decimals,
    decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
{
    var creditBalance = Service.GetCreditLimit(customerNumber,
        totalPaymentAmountScheduled, prepaymentAmount);
    if (creditBalance.CalcCustomerOverdue == CalcCustomerOverdue.Yes &&
        creditBalance.CustomerBalanceOverdue > creditBalance.CustomerAmountOverdue)
    {
        creditBalance.CustomerCreditMessage = string.Format(
            InvoiceEntryResx.CustCreditDaysOverdue,
            creditBalance.CustomerDaysOverdue,
            creditBalance.CustomerBalanceOverdue.ToString(decimals),
            creditBalance.CustomerAmountOverdue.ToString(decimals));
    }
    if (creditBalance.CalcNatAcctOverdue == CalcNatAcctOverdue.Yes &&
        creditBalance.NatAcctBalanceOverdue > creditBalance.NatAcctAmountOverdue)
    {
        creditBalance.NationalCreditMessage = string.Format(
            InvoiceEntryResx.NatCreditDaysOverdue,
            creditBalance.NatAcctDaysOverdue,
            creditBalance.NatAcctBalanceOverdue.ToString(decimals),
            creditBalance.NatAcctAmountOverdue.ToString(decimals));
    }
    creditBalance.CustomerCreditLimit =
        Convert.ToDecimal(creditBalance.CustomerCreditLimit.ToString(decimals));
    creditBalance.CustomerBalanceVal =
        Convert.ToDecimal(creditBalance.CustomerBalanceVal.ToString(decimals));
    creditBalance.PendingARAmount =
        Convert.ToDecimal(creditBalance.PendingARAmount.ToString(decimals));
    creditBalance.PendingOEAmount =
        Convert.ToDecimal(creditBalance.PendingOEAmount.ToString(decimals));
    creditBalance.PendingOtherAmount =
        Convert.ToDecimal(creditBalance.PendingOtherAmount.ToString(decimals));
    creditBalance.CurrentARInvoiceAmount =
        Convert.ToDecimal(creditBalance.CurrentARInvoiceAmount.ToString(decimals));
    creditBalance.CurrentARPrepaymentAmount =
        Convert.ToDecimal(creditBalance.CurrentARPrepaymentAmount.ToString(decimals));
    creditBalance.CustomerOutstanding =
        Convert.ToDecimal(creditBalance.CustomerOutstanding.ToString(decimals));
    creditBalance.CustomerLimitExceeded =
        Convert.ToDecimal(creditBalance.CustomerLimitExceeded.ToString(decimals));
    creditBalance.NatAcctCreditLimit =
        Convert.ToDecimal(creditBalance.NatAcctCreditLimit.ToString(decimals));
    creditBalance.NationalAccountBalance =
        Convert.ToDecimal(creditBalance.NationalAccountBalance.ToString(decimals));
    creditBalance.NatAcctOutstanding =
        Convert.ToDecimal(creditBalance.NatAcctOutstanding.ToString(decimals));
    creditBalance.NatAcctLimitLeft =
        Convert.ToDecimal(creditBalance.NatAcctLimitLeft.ToString(decimals));
    creditBalance.NatAcctLimitExceeded =
        Convert.ToDecimal(creditBalance.NatAcctLimitExceeded.ToString(decimals));
    return new ViewModelBase<CustomerBalance> { Data = creditBalance };
}
This routine first calls the GetCreditLimit service in InvoiceEntryEntityService.cs:
public virtual CustomerBalance GetCreditLimit(string customerNumber,
    decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
{
    var repository = Resolve<IInvoiceEntryEntity<TBatch, THeader,
        TDetail, TPayment, TDetailOptional>>();
    return repository.GetCreditLimit(customerNumber,
        totalPaymentAmountScheduled, prepaymentAmount);
}
This then calls the repository GetCreditLimit routine in InvoiceEntryRepository.cs. This routine does regular View processing using the base repository wrapper routines, which insulate us from the session/dblink handling logic and do some basic error processing:
public virtual CustomerBalance GetCreditLimit(string customerNum,
    decimal totalPaymentAmountScheduled, decimal prepaymentAmount)
{
    _header.Read(false);
    _creditCheck.SetValue(CustomerBalance.Fields.CustomerNumber, customerNum);
    _creditCheck.SetValue(CustomerBalance.Fields.CurrentARInvoiceAmount,
        totalPaymentAmountScheduled);
    _creditCheck.SetValue(CustomerBalance.Fields.CurrentARPrepaymentAmount,
        prepaymentAmount);
    _creditCheck.Process();
    return _creditCheckMapper.Map(_creditCheck);
}
Finally, down in the business repository, the code should look fairly familiar to anyone who has done any C# coding using our Sage 300 .Net API. Further, this code should also appear somewhere in the matching VB code, though besides being translated to the .Net API, it's become quite separated from the UI control code (in this case the JavaScript).
At the end of this, all the calls return, propagating the returned data back to the Browser in answer to the AJAX call that it made.
It might look like a lot of code here, but remember the business repository and JavaScript bits have corresponding VB code. The other layers are there to make the code more re-usable, so that it can be used in contexts like the Web API, and to provide the interface hooks needed for customization.
Summary
This article is intended to give you an idea of where to put your code that accesses the Sage 300 Business Logic and then how to call that from the Web UIs. There are a lot of layers but individually most of the layers are fairly simple and most of the code will appear in the Business Repository and the JavaScript behavior code.
Stateless Versus Stateful Sage 300c Web UIs
Introduction
When two computers communicate they use a well-defined communications protocol. In Browser to Server communications these are often broadly categorized as either stateless or stateful. A stateless communications protocol doesn’t require that anything be remembered between calls. In web applications this is desirable for a number of reasons:
- often calls are load balanced across multiple servers (perhaps even in different locations).
- storing state in memory can be expensive and will limit how many users can be accessing a web server at once.
On the other hand, maintaining server state has some advantages:
- less information needs to be transferred between the client and server since the server knows what has gone before.
- having things handy in memory can make operations faster and more context aware.
Often in the real world these components are combined or stacked on each other, for instance the TCP protocol is stateful and then the HTTP stateless protocol is layered on top of TCP.
The Sage 300c Web UIs use both stateless and stateful technology. This article will talk about when we use each, how to program either case and what are the advantages and disadvantages of each method. Like many things in programming the lines between these two things can become quite blurred.
Since we do have stateful UIs, if you scale out to multiple web servers then the load balancer must use sticky sessions, as we explained in a previous blog posting. Basically this guarantees that all the requests from a given client go to the same web server, in case they are running a stateful UI.
Stateless Sage 300 Web UIs
Most of the Sage 300c setup UIs, processing UIs and report UIs are all stateless. This means IIS could be reset between requests and things would still work; it would just be quite slow, since it would need to re-open a Sage 300 session and re-open the Views to process the next request. To avoid this we keep a cache of open sessions along with their open Views. When a stateless request comes in, we match up a session for the right company and user, with the right Views, from the session pool and use that to handle the request. If a session in the pool hasn't been used for a while we release it, and if too many sessions are in use, we release the oldest when we need another one. Practically speaking, you usually open a session for your first stateless request and then keep using it from the cache for the duration of your work.
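The pooling idea described above can be sketched generically. This is not the real server-side code (which is C# inside the framework); the class, keying scheme and eviction policy below are illustrative assumptions only.

```javascript
// Illustrative sketch of a session pool: sessions are keyed by user/company,
// reused across stateless requests, and the least recently used entry is
// evicted when the pool is full. Not the actual Sage 300 implementation.
class SessionPool {
    constructor(maxSize) {
        this.maxSize = maxSize;
        this.pool = new Map();  // key -> { session, lastUsed }
    }
    key(user, company) {
        return user + "/" + company;
    }
    acquire(user, company, openSession) {
        const k = this.key(user, company);
        let entry = this.pool.get(k);
        if (!entry) {
            // Pool is full: evict the least recently used session.
            if (this.pool.size >= this.maxSize) {
                let oldestKey = null, oldestTime = Infinity;
                for (const [pk, pe] of this.pool) {
                    if (pe.lastUsed < oldestTime) {
                        oldestTime = pe.lastUsed;
                        oldestKey = pk;
                    }
                }
                this.pool.delete(oldestKey);
            }
            entry = { session: openSession(user, company), lastUsed: 0 };
            this.pool.set(k, entry);
        }
        entry.lastUsed = Date.now();
        return entry.session;
    }
}
```

The point of the sketch: only the first request pays the cost of openSession (opening the Sage 300 session and its Views); later stateless requests for the same user and company get the cached session back.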
Every Sage 300c business logic entity (View) will have a stateless repository defined for it. This is because only the document entry UI needs stateful operation, everything else like finders, export and other UIs will want to use the stateless version. You get all the default behavior for your stateless repository by having your specific repository inherit from one of the stateless base repositories in Sage.CA.SBS.ERP.Sage300.Common.BusinessRepository.Base like FlatRepository, ProcessingRepository or InquiryRepository.
Generally stateless operation fits very well with ASP.Net MVC since this is the natural way that it works. There is a lot of infrastructure to pass the data model back and forth between the client and server in a stateless manner. Add to that our use of knockout data binding and this makes most basic CRUD operations all handled by the framework.
This doesn’t mean that a stateless UI can’t dynamically interact with the user. By default, you have the CRUD operation off the basic navigation and save/delete buttons. But you can certainly add your own custom AJAX calls to dynamically update areas of your form. We provide support in the framework to update things like descriptions of external keys, but in fact you have a lot of power to make your UI very interactive and polished.
Stateful Sage 300 Web UIs
Most of the main Accounting document entry UIs are stateful. This includes UIs like Order Entry, Purchase Orders, G/L Journal Entry and A/R Invoice Entry. These UIs have a lot of sophistication in building up the Accounting document and the size of these documents can be quite large. Sending this entire document back and forth between the browser and the server as a JSON object is quite impractical. When one of these UIs is started, a session and set of Views are assigned to it for the duration that it runs. As changes are made to the UIs they are sent to the server to make the attendant changes to the server model.
To create a stateful business repository, just inherit from one of the stateful base repositories in Sage.CA.SBS.ERP.Sage300.Common.BusinessRepository.Base.Statefull like BatchHeaderDetailRepository or SequencedHeaderDetailRepository.
When you exchange information between the server and browser, rather than sending the entire document, you tend to send one component at a time, like one order detail line or the order header. This reduces the amount of data transmitted but, more importantly, greatly reduces the size of the JSON object that JavaScript needs to process. If the transferred document gets too large then JavaScript processing speed can become a real bottleneck.
You don’t need to send something to the server every time every field changes, only the fields that cause some business logic to execute. So you don’t need to do anything say when the user changes a description field. When a field changes that causes business logic on the server to execute then you need to send that to the server and get back all the fields that changed as a result. We tend to mostly do this using the default ASP.Net MVC mechanisms along with the knockout data binding. But only doing it for the component record. So say the item number in a detail record is changed, then we would refresh that record by sending it to the server, which would set the changed fields, do the Sage 300 business logic operations (puts and gets) and return the newly updated record where a number of fields have changed.
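A hypothetical sketch of the client side of that detail refresh round trip; the function and field names are illustrative, not the actual framework API.

```javascript
// Illustrative sketch: when a field like the item number changes, the whole
// detail record is sent to the server, which sets the changed field, runs the
// View's business logic (puts and gets) and returns the updated record. The
// client then merges whatever fields came back changed.
function mergeRefreshedRecord(localRecord, serverRecord) {
    const changed = [];
    for (const field of Object.keys(serverRecord)) {
        if (localRecord[field] !== serverRecord[field]) {
            localRecord[field] = serverRecord[field];
            changed.push(field);
        }
    }
    // In the real screens the changed fields would flow into the knockout
    // observables so the UI updates automatically.
    return changed;
}
```

So changing the item number might come back with a new description, unit price and extended amount, all merged into the local record in one pass.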
This works for most cases, but there is room for optimization. If something is rather a major operation, you might want a more tailored AJAX call for the processing. Similarly, if multiple records are affected you might want to denormalize a bit to reduce the dataflow. For instance, when dealing with details, often header totals change, but you might want to associate these with the detail, rather than also refreshing the header.
Summary
This was a quick overview of the main modes of how our Sage 300 Web UIs operate. Stateless operation reduces server overhead, while stateful operation lets us fully leverage our existing business logic while reducing our bandwidth requirements.
Adding a Grid to Your Sage 300 Web UI
Introduction
The grid or table control is a key element for data entry in any Accounting application. With Sage 300 we use the grid control to enter things like Order or Invoice details. Interactions with a grid control tend to be quite complex. The data has to be managed so it is loaded only a page at a time (often called virtual scrolling), since there could be thousands of detail lines and loading them all at once would be quite slow. There is the ability to edit, delete and add lines. Tabbing has to be handled well to enhance data entry. People also have the ability to re-arrange the grid columns, hide columns and then expect these changes to be remembered.
This blog article will talk about the key elements to adding a grid control to your Sage 300 Web UI and what sort of support you need in your UI to support all the desired functionality. A fair bit is handled for you in the Sage 300 Web UI Framework, however you have to handle various events and there is a lot of power to add your own programming.
Configuration
There is a lot of support for standard grid operations in the Sage 300 Web UI framework. Much of this is controlled by a config JSON object which is passed to our @Html.KoKendoGrid function that defines the grid in the Razor View. This file defines a number of properties of the grid along with a number of standard callout functions you can define to add your custom processing. The good news is we have a utility to generate much of this from the ASP.Net MVC Model.
JavaScript Generation Utility
To generate this code we provide a utility which will generate the Razor View code and a lot of the standard JavaScript code that you need. So for instance the code for the Razor View might be:
And then some of the JavaScript code for the config object might be:
Server Side Pagination
In our VB UIs, we had virtual scrolling in our grids, which would basically bring in a page or two at a time. It supported scrolling one page ahead or one page back, go to the top or bottom but you couldn’t go to an arbitrary point in the file without searching (in fact the scroll bar would always be at the top, bottom or right in the middle). In the Web UIs we use the Kendo UI Grid control and try to keep the scrolling mechanism standard for the Web, which means the control tells you how many pages there are and lets you go to any page you like as well as going to the next or previous one.
We provide a lot of the support for doing this in our business repository base classes, which expose a get method taking the page number, page size, filter and order as parameters. Then as long as you match and set the configuration data in the grid's JavaScript config JSON object, you get the pagination support. There are a couple of things to keep in mind. One is that we rely on our filterCount API call, which translates directly to a SQL statement; this means it can only count based on database fields and not calculated fields, so you can't restrict the records in your grid based on any calculated fields or the count will be wrong (if you really need this then you need to disable the ability to go to a specific page). You also need to have a hidden SerialNumber column in the grid which contains the record number.
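The pagination arithmetic the grid relies on can be sketched like this. This is a simplified stand-in, not the real repository get method; in particular the in-memory array here stands in for the filterCount SQL call described above.

```javascript
// Simplified sketch of server-side pagination. In the real repository the
// total comes from filterCount (a SQL COUNT, so database fields only) and
// the page slice comes from the Views, not from an in-memory array.
function getPage(records, pageNumber, pageSize) {
    const totalResults = records.length;  // stand-in for filterCount
    const totalPages = Math.max(1, Math.ceil(totalResults / pageSize));
    const start = (pageNumber - 1) * pageSize;  // pageNumber is 1-based here
    return {
        totalResults: totalResults,
        totalPages: totalPages,
        items: records.slice(start, start + pageSize)
    };
}
```

This is what lets the Kendo UI Grid show the total page count and jump to any page, unlike the VB virtual scrolling which could only move relative to the current position.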
ViewListControl vs AccpacGrid
In our VB UIs, we actually had two grid controls. One was the ViewListControl which would show a separate View record in each line and supported virtual scrolling. Then we had the AccpacGrid control which would usually show an array of fields from a single record (like tax information, or perhaps item structure information).
In the Web UIs we only have one Grid control. It naturally works more like VB's ViewListControl. So how do we handle the other case of the AccpacGrid? We do this in our controller by translating the array of fields into what looks like a list of details; this way the Grid control doesn't really see a difference. Usually you don't need to enable virtual scrolling in this case since there are typically 5 or 10 records and you just provide them all at once. So typically your ViewModel will have a list of records which the controller will populate, and then this is set as the Grid's data source.
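The translation from an array of fields in one record into grid rows can be sketched like this; the field-naming convention used here is an illustrative assumption, not necessarily how any particular View lays out its fields.

```javascript
// Illustrative sketch: turn repeated fields from a single record (say tax
// authority/amount pairs named like TAXAUTH1..TAXAUTH5) into a list of row
// objects that the grid's data source can bind to, so the Grid control sees
// what looks like a list of details.
function fieldsToRows(record, fieldNames, count) {
    const rows = [];
    for (let i = 1; i <= count; i++) {
        const row = {};
        for (const name of fieldNames) {
            row[name] = record[name + i];  // e.g. record["TAXAUTH" + 2]
        }
        rows.push(row);
    }
    return rows;
}
```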
Editing
As in VB, the approach to editing cells is to place the correct edit control over the grid cell to perform the edit. There is a lot of framework support for this, as well as lots of callouts for you to do your own custom processing. The same is true for adding a new line and deleting a set of lines (note that the Web UI grid supports multiple selection). Also note that the add line, delete line and edit columns buttons aren't part of the Grid; these are separate buttons styled to look like part of the grid, in a region just above it. This means you can easily add your own buttons and controls to this area if you wish.
Saving Preferences
We have API support to help with loading and saving grid column preferences. In VB these are stored in the *_p.ism files, in the Web UIs these are stored in the SQL database in the new USRPROP table. So emptying USRPROP is the Web UI equivalent of deleting the *_p.ism files. Generally, we want to move everything into the database and remove our reliance on the shared data folder over time.
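To illustrate what such a preference amounts to, here is a hedged sketch of capturing and restoring grid column state as JSON. The actual USRPROP storage format is internal to the framework; the shape below is purely an assumption for illustration.

```javascript
// Illustrative sketch of saving/restoring grid column preferences (order and
// hidden flags) as JSON, analogous in spirit to what the framework persists
// in the USRPROP table. The saved shape is an assumption, not the real format.
function captureColumnPrefs(columns) {
    return JSON.stringify(columns.map(function (c, i) {
        return { field: c.field, hidden: !!c.hidden, order: i };
    }));
}

function applyColumnPrefs(columns, savedJson) {
    const prefs = JSON.parse(savedJson);
    const byField = {};
    for (const p of prefs) byField[p.field] = p;
    // Restore hidden flags, then sort columns back into the saved order.
    for (const c of columns) {
        c.hidden = byField[c.field] ? byField[c.field].hidden : false;
    }
    columns.sort(function (a, b) {
        return (byField[a.field] ? byField[a.field].order : 0) -
               (byField[b.field] ? byField[b.field].order : 0);
    });
    return columns;
}
```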
Summary
This article was just a quick introduction to adding a Grid control to a Web UI. Similar to the VB UIs, the grid control is potentially quite complicated as it supports a lot of diverse functionality. But, if you are doing fairly standard functionality, look for a lot of support in the Web UI framework to help you get the job done.
Adding Your Application to the Home Page
Introduction
We’ve been talking about how to develop Web UIs using our beta SDK. But so far we’ve only been running these in Visual Studio, we haven’t talked about how to deploy them in a production system. With this article we’ll discuss how to add your menus to the home page. Which files need to be installed where and a few configuration notes to observe.
We’ll continue playing with Project and Job Costing, so first we’ll add PJC on to the end of the More menu in the Home Page:
As part of this we’ll build and deploy the PJC application so that we can run its UIs in a deployed environment, rather than just running the screens individually in Visual Studio like we have been.
The Code Generation Wizard
When you create your solution, you get a starting skeleton Sage.PM.BusinessRepository/Menu/PMMenuModuleHelper.cs. I’m using PM since I’m playing at creating PJC Web UIs, but instead of PM you will get whatever your application’s two letter prefix is. If you don’t have such a prefix, remember to register one with Sage to avoid conflicts with other Sage 300 SDK developers. Similarly, I use Sage as my company, but in reality this will be whatever company name you specified when you created the solution. This MenuModuleHelper.cs file specifies the name of the XML file that specifies your application’s Sage 300 Home Page menu structure. This C# source file is also where you put code to dynamically hide and show your program menu items, so for instance if you have some multi-currency only UIs then this is where you would put the code to hide them in the case of a single currency database (or application).
The solution wizard creates a starting PMMenuDetails.xml file in the root of the Sage.Web project. Then each time you run the code generation wizard it will add another item for the UI just generated. This will produce rather a flat structure so you need to polish this a bit as well as fix up the strings in the generated MenuResx.resx file in the Sage.PM.Resources project. This resource file contains all the strings that are displayed in the menu. Further you can optionally update all the generated files for the other supported languages.
One caveat on the MenuDetails.xml file is that you must give a security resource that the user has rights to, or nothing will display; leaving this out or putting N/A won't work. A good reference point: since these are XML files, you can see all of Sage's MenuDetails.xml files by looking in the Sage 300\Online\Web\App_Data\MenuDetail folder. Note that the way the customize screen works, it removes items and puts them in a company folder under these. It will regenerate them if the file changes, but if you have trouble you might try clearing these folders to force them to be regenerated.
Below is a sample XML element for a single UI to give a bit of flavor of what the MenuDetails.xml file contains.
<item>
<MenuID>PM4001</MenuID>
<MenuName>PM4001</MenuName>
<ResourceKey>PMCostType</ResourceKey>
<ParentMenuID>PM2000</ParentMenuID>
<IsGroupHeader>false</IsGroupHeader>
<ScreenURL>PM/CostType</ScreenURL>
<MenuItemLevel>4</MenuItemLevel>
<MenuItemOrder>2</MenuItemOrder>
<ColOrder>1</ColOrder>
<SecurityResourceKey>PMCostType</SecurityResourceKey>
<IsReport>false</IsReport>
<IsActive>true</IsActive>
<IsGroupEnd>false</IsGroupEnd>
<IsWidget>false</IsWidget>
<Isintelligence>false</Isintelligence>
<ModuleName>PM</ModuleName>
</item>
Post Build Utility
Now that we have our menu defined and our application screens running individually in debug mode inside Visual Studio, how do we deploy it to run inside IIS as a part of the Sage 300 system? Which DLLs need to be copied, which configuration files need to be copied and where do they all go? To try these steps, make sure you have the latest version of the Sage 300 SDK Wizards and the matching newest beta build.
The Wizard adds a post build event to the Web project that will deploy all the right files to the local Sage 300 running in IIS. The MergeISVProject.exe utility can also be run standalone outside of Visual Studio; it's a handy mechanism to copy your files. It's usually a good idea to restart IIS before testing this way to ensure all the new files are loaded.
This utility basically copies the following files to places under the Sage300\online\web folder:
- The bootstrapper .xml file is the configuration file which defines your application to Sage 300. Think of this as like roto.dat for the Web. It defines which of your DLLs to load using Unity dependency injection when we start up.
- App_Data\MenuDetail\PMMenuDetails.xml is your menu definition that we talked about earlier.
- Areas\PM\*.* are all your Razor Views and their accompanying JavaScript. Basically anything that needs to go to the browser.
- Bin\Sage.PM.*.dll and Bin\Sage.Web.DLL are the DLLs that you need to run. (Keep in mind that I'm using Sage for my company name; you will get whatever your company is instead of Sage in these.)
With these in place your application is ready to roll.
Update 2016/01/20: This tool was updated to support compiled Razor Views and the command line is now:
Call "$(ProjectDir)MergeISVProject.exe" "$(SolutionPath)" "$(ProjectDir)\"
{ModuleName}MenuDetails.xml $(ConfigurationName) $(FrameworkDir)
Plus it is only run when the solution (not an individual project) is set for a “Release” build.
Compiled Views
When we ship Sage 300, all our Razor Views are pre-compiled. This means the screens start much faster. If you don't pre-compile them, then when first accessed, IIS needs to invoke the C# compiler to compile them, and we've found that this process can take ten seconds or so. Plus, the results are cached and the cache is reset every few days, causing this to happen all over again. Another plus is that pre-compiled Views can't easily be edited, which means people can't arbitrarily change them and cause problems for future upgrades.
Strangely, Visual Studio doesn't have a dialog where you can set whether you want your Views pre-compiled; you have to edit the Sage.Web.csproj file directly and change the XML value:
<MvcBuildViews>false</MvcBuildViews>
between true and false yourself in a text editor.
The Sage 300 system is set so that it only runs compiled Razor Views. If you do want to run un-compiled Razor Views, then you need to edit Sage300\online\web\precompiledapp.config and change updatable from false to true.
Beta note: As I write this the MergeISVProject utility doesn’t copy compiled Views properly. This will be fixed shortly, but in the meantime if you want to test with compiled Views you need to copy these over by hand.
New Beta note: This tool now fully supports compiled razor views.
Beta note: The previous beta wouldn’t successfully compile if you were set to use compiled Views, this has been fixed and the solution wizard now includes all the references necessary to do this.
Summary
This article was meant to give a quick overview of the steps to take once you have a screen working in Visual Studio debug mode, to being able to test it running in the Sage 300 Home Page as part of a proper deployment. Generally, the tools help a lot and hopefully you don’t find this process too difficult.
Sage 300 Web UI SDK – Adding UI Controls
Introduction
In my last posting I showed how to quickly create an empty Sage 300 Web UI by running our two new wizards from Visual Studio. In this article we’ll look at how to add some visual controls to this project and talk a bit about some of the issues with doing this, namely about using our provided HTML helper functions and CSS styling.
We're basically going to continue on and add the visual elements for the PJC Cost Types setup screen. We won't write any JavaScript yet, so the only functionality will be that provided by the code generator and the default data binding support. This still gives quite a bit, as you can navigate, use the finder, delete records and save updates.
The UI Wizard discussed last week produces a simple starting page with the standard heading controls, the key field and the Save and Delete buttons. These are all wired up to JavaScript and working. This makes our life much easier when adding the rest of the controls.
The only thing you need to do manually is change the Starting Page to: “/OnPremise/PM/CostType” on the Web tab of the Web project’s properties. Then it will compile and run yielding:
Adding the Parts
ASP.Net MVC Razor Views are a technique to dynamically generate HTML by embedding C# code in an HTML template. When the HTML needs to go to the browser, the C# code is executed and usually generates more HTML into the template, so that pure dynamically generated HTML is transmitted to the browser. The Razor View system is very extensible, and we take advantage of this by adding a large set of helper functions.
Below is the screen once we add some more controls. I show it with a record loaded since that part works with the generated code. The dates and bottom combo box aren't working yet since we need to add some JavaScript code to help them out.
The source code for this screen's Razor View (the partial view part) is:
(That didn't work so well. Apparently WordPress ate all the div's; I'll do a bit of research to see if I can fix this, so a bit is missing from the code below. I also added some line breaks so the code doesn't go off the right of the page.)
@* Copyright © 2015 Sage *@
@model Sage.Web.Areas.PM.Models.CostTypeViewModel<Sage.PM.Models.CostType>
@using Sage.PM.Resources.Forms
@using Sage.CA.SBS.ERP.Sage300.Common.Web.AreaConstants
@using Sage.CA.SBS.ERP.Sage300.Common.Resources
@using Sage.CA.SBS.ERP.Sage300.Common.Web.HtmlHelperExtension
@using Sage.CA.SBS.ERP.Sage300.Common.Models.Enums
@using AnnotationsResx = Sage.CA.SBS.ERP.Sage300.Common.Resources.AnnotationsResx
@Html.ConvertToJsVariableUsingNewtonSoft("CostTypeViewModel", Model)
@Html.Partial("~/Areas/PM/Views/CostType/Partials/_Localization.cshtml")
<section class="header-group">
@Html.SageHeader3Label("CostTypeHeader", CostTypeResx.Entity)
@if (Model.UserAccess.SecurityType.HasFlag(SecurityType.Modify))
{
@Html.KoSageButton("btnNew", null, new { @value = CommonResx.CreateNew, @id = "btnNew", @class = "btn-primary" })
}
@Html.Partial(Core.Menu, Model.UserAccess)
</section>
<section class="required-group">
@Html.SageLabel(CommonResx.RequiredLegend, new { @class = "required" })
</section>
@Html.SageLabel("CostTypeCode", CostTypeResx.CostTypeCode, new { @class = "required" })
@Html.KoSageTextBoxFor(model => model.Data.CostTypeCode, new { @sagevalue = "Data.CostTypeCode", @valueUpdate = "'input'" }, new { @id = "txtCostTypeCode", @class = "default txt-upper", @formatTextbox = "alphaNumeric" })
@Html.KoSageButton("btnLoadCostTypeCode", null, new { @id = "btnLoad", @class = "icon btn-go", @tabindex = "-1" })
@Html.KoSageButton("btnFinderCostTypeCode", null, new { @class = "icon btn-search", @id = "btnFinderCostTypeCode", @tabindex = "-1" })
@Html.ValidationMessageFor(model => model.Data.CostTypeCode)
</div>
@* End of generated header, next is code I wrote. *@
@Html.SageLabelFor(model => model.Data.Description)
@Html.KoSageTextBoxFor(model => model.Data.Description, new { @value = "Data.Description", @valueUpdate = "'input'" }, new { @id = "tbDescription", @class = "large" })
@Html.ValidationMessageFor(model => model.Data.Description, null)
</div>
@Html.SageLabelFor(model => model.Data.LastMaintained)
@Html.KoSageTextBoxFor(model => model.Data.LastMaintained, new { @value = "Data.ComputedLastMaintainedDate" }, new { @disabled = "true", @class = "default" })
@Html.KoSageCheckBox("chkStatus", false, new { @sagechecked = "Data.Status" }, new { @id = "chkStatus" })
@Html.SageLabel(CommonResx.InactiveAsOfDate, null, new { @for = "chkStatus", @class = "" })
@Html.KoSageTextBox("txInactiveDate", new { @value = "Data.ComputedInactiveDate" }, new { @disabled = true, @class = "default " })
</div>
</div>
@Html.SageLabelFor(m => m.Data.CostClass, new { @id = "lblCostClass", @class = "" })
@Html.KoSageDropDownList("Data_CostClass", new { @options = "CostClass", @sagevalue = "Data.CostClass", @optionsText = "'Text'", @optionsValue = "'Value'" }, new { @class = "w188" })
</div>
@* End of my code, next is the generated footer. *@
<section class="footer-group">
@if (Model.UserAccess.SecurityType.HasFlag(SecurityType.Modify))
{
@Html.KoSageButton("btnSave", new { }, new { @value = CommonResx.Save, @id = "btnSave", @class = "btn-primary" })
@Html.KoSageButton("btnDelete", new { }, new { @value = CommonResx.Delete, @id = "btnDelete", @class = "btn-primary" })
}
</section>
</div>
I put comments around the code I wrote so you can see what is generated by the code generation wizard versus the code you add later. Basically this is a mixture of C# code (each line starts with @) and HTML which is in the angle brackets.
There isn’t much layout in this file because this is handled by the CSS. For simple screens like this one there are sufficient styles in the provided Sage standard CSS file that we don’t need to add any CSS. As a result, the HTML is actually fairly simple and really just used to logically group things.
Notice that we use Sage provided extension functions to create all the controls. This provides us with the hooks to provide quite a bit of standard functionality. For instance, we don’t want any hard coded strings in our HTML, otherwise we would force our translators to produce a different copy of the HTML for each language and then we would have to maintain all these files. Here we just use the helper function and it will look up the correct string from the language resource appropriate for the user’s language setting. This also gives us the ability to change the underlying control without changing all the HTMLs. So we can use a different date picker control for instance by changing the code our helper function emits rather than editing each HTML individually. Basically giving us a lot of global control over the behavior of the product.
These helper functions can also set up databinding. Any helper that starts with ko will bind the data to the model (more precisely, the viewmodel). We used ko since we use knockout.js for databinding, which perhaps isn't the best choice of function naming since, again, we can change the mechanism in the background without affecting the application code.
Notice there is a partial view called _Localization.cshtml that is included. This provides any localized strings that are needed by JavaScript. So anything referenced in here will be generated in the correct language when the page is loaded.
There is a strange call to “ConvertToJsVariableUsingNewtonSoft” near the top of the file. This is to load a copy of the model into JavaScript during page loading. This means we don’t need to do an initialization RPC call to get the model (Sage 300 View) meta data. Basically the usual empty screen then has the default data and meta data as a starting point.
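To make this concrete, here is a hypothetical sketch of what that preloaded variable amounts to and how the behavior code can use it at load time. The variable shape shown is a simplification for illustration, not the exact serialized model.

```javascript
// What ConvertToJsVariableUsingNewtonSoft effectively produces is a global
// like this, serialized from the server-side model during page generation.
// The shape here is simplified for illustration.
const CostTypeViewModel = {
    Data: { CostTypeCode: "", Description: "", Status: true },
    UserAccess: { SecurityType: 2 }
};

// The screen's JavaScript can then initialize without an extra round trip
// to fetch the model and its View meta-data. In the real screens this data
// would be fed into the knockout data binding; here we just hand it back.
function initialModel() {
    return CostTypeViewModel.Data;
}
```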
Summary
This was a quick look at the Razor View part of our Web UIs. This is where the controls and layout are specified. Layout is handled by CSS and data binding is provided to greatly reduce required coding. Next we’ll start to look at the JavaScript that runs behind the scenes in the Browser.
Introducing the SDK for the Sage 300 Web UIs
Introduction
Sage 300 has always provided an SDK to allow ISVs to create accounting applications in the same way that we create our own applications like General Ledger or Order Entry. In the past our internal application developers have usually only had the SDK installed for doing their own work.
Further these ISVs can install their applications into a working Sage 300 installation by just copying a specific set of folders. We then will see these folders and allow that module to be activated and used.
The new Web Screens will have the same ability to create custom accounting applications and to easily add them to one of our installations.
We will be starting the beta program for this SDK shortly, so this should start to give people a preview of what is coming.
This overview assumes you have an existing SDK program: that you have Sage 300 Views and VB UIs, and that you have an activation UI and can activate your module, making it known to Sage 300. This article just covers how to create the actual Web UI components.
The Module Creation Wizard
We are first going to create a Visual Studio solution for your Accounting module. Then we will use another wizard to add the screens to this solution. The solution will contain several projects that correspond to the parts of a UI screen. This is different than each screen having its own project. This stays in tune with how the ASP.Net MVC tools create solutions and allows us to leverage everything built into Visual Studio.
Create a Visual Studio project. We provide a project wizard to create your solution. Let’s pretend we are going to create the Project and Job Costing module:
The wizard will then ask you some questions about your module.
And then create a solution with the correct project structure for your application.
This solution now has the correct structure to add screens to, plus it has all the module level components and references. This will compile, but there isn't anything to run yet.
The UI Wizard
Now you create your separate UIs by running our Code Generation Wizard. You get this by right clicking on the solution and choosing it from the context menu.
This then brings up a wizard that you can step through.
Depending on what you choose for the Code Type, you will get a relevant screen for the details. If you choose Flat you will get the following:
The View ID will be used to generate the model and business repository for this screen. Basically it will use the View meta-data to generate C# classes that will provide most of the functionality to perform standard CRUD type operations.
Next you get a screen to specify which resource file to use for your strings:
Like all our previous SDKs, there is full support for producing a multi-language product. Of course, as in the past, it's up to you whether you leverage this or not.
Now you get to select some options of features to include:
Within the Sage accounting modules, the I/C, O/E and P/O Views contain more functionality for determining whether a field is editable than do the G/L, A/R or A/P Views. The "Generate Dynamic Enablement" option indicates whether all the editable checking is done by your UIs or by your Views.
Now it's time to confirm generating the code:
And finally you get the list of files that it generated for you:
The wizard has used the meta-data from the Sage 300 Business Logic View, in this case the PJC Cost Types View, to generate the code for a Business Repository and an empty HTML screen.
Real Work
Running these wizards is quite quick and hopefully they give you a good start. The solution will compile and run, but all you get is a blank screen, since the generated Razor View just contains a TODO to add some controls. Now the real work begins adding controls to your Razor Views, adding custom processing logic and generally wiring things up.
You can now use the code-debug-fix cycle within Visual Studio and hopefully find it a productive way to create your Sage 300 screens.
In future articles I’ll talk about creating the Razor Views, using the extension functions we supply to help make this process easier and the CSS that is used to give the screens a standard look and feel. Then we will need to go into how to wire up finders, perform custom processing and all the other things required to make a Sage 300 screen.
Summary
This was a very quick look at the SDK for our Web Screens. We haven’t covered any coding yet, but we will. All the functionality used is built into the DLLs installed with Sage 300, so the actual SDK component is quite small. Besides the wizard, there is a lot of framework support to help you with common components and abstractions to hide some of the details.