Stephen Smith's Blog

Musings on Machine Learning…

Posts Tagged ‘technology’

Customization: A Two-Edged Sword

with 2 comments

Introduction

When implementing a mid-market ERP system, it’s often suggested that the base ERP should provide 80% of the needed functionality and then the other 20% is provided by customization. The rationale behind this is that although GAAP (Generally Accepted Accounting Principles) is standard, each business is unique and has unique requirements in addition to GAAP, especially in modules outside of the core financials.

All Sage ERP products offer quite high levels of customization ability. Generally we use this as a competitive advantage in making sales, since we can more closely match a customer’s requirements. However, some customizations can lead to quite high costs and problems down the road (perhaps a version or two later).

As more and more customers look to move to the Cloud, we need to rethink customization and what we are trying to achieve. This blog post looks at some of the problems with customization and some things to consider when recommending it. These apply both to hosting current ERPs in the Cloud and to how we have to think about Connected Services and Features as a Service (FaaS) in the Cloud.

Problems

This is a rather unfair list of many of the problems we typically see with customizations:

  1. Cannot upgrade to a new version of the software because it means doing the customizations all over again.
  2. An initial product implementation fails because the costs and delivery of the customization go over budget and behind schedule.
  3. Some customizations cause system stability issues. For instance, adding SQL triggers to the database: if a trigger fails, the transaction that invoked it fails with a hard-to-diagnose error.
  4. Some customizations cause performance load problems. Perhaps a custom report or inquiry causes a SQL query to run that takes hours to finish, slowing everyone else in the meantime.
  5. In a hosted version everyone runs the same programs, so some customizations can prevent you from moving to the cloud.
  6. Extra testing is required for every product update or hotfix to ensure they don’t interfere with customizations. This restricts the use of things like automatic updates.
  7. Support difficulties. Customer support has a hard time diagnosing problems in and around customizations because they don’t know what the customizations are meant to do or what they might affect.

Moving to the Cloud

As we look to move more customers to the cloud, we have to deal with all these customization issues. Some level of customization is necessary, so the question to some degree becomes how much? In the cloud we want everyone sharing the same set of programs, so no customized DLLs or EXEs. In the cloud we want everyone to immediately receive hotfixes and product updates, so customizations must be upgrade safe. Further when a new major version comes out, we want everyone in the cloud moved to this new version in a quick, transparent and automated fashion. Hence any customizations allowed in the cloud can’t prevent data activations and can’t require a lot of special testing for each new version.

Causes

So what causes many of the problems above? A lot of customization can be a sign of an incorrect initial product choice, resulting in trying to fit a square peg into a round hole. It can also indicate an overly eager-to-please sales team promising that the product will do many things it normally doesn’t. Plus, many companies’ main line of business is developing customizations, so providing these is a major source of revenue.

A big cause of problems with upgrading customization is the result of database schema changes in the core ERP. This is usually the result of adding new features or streamlining functionality for performance and scalability reasons.

Often business logic changes can have unforeseen effects on customizations even though the database schema doesn’t change; perhaps a customization relies on the way the business logic used to work.

Solutions

As solutions move to the cloud, the nature and scope of customizations is changing. This affects vendors like Sage, in that we need to make sure all customizations can be automatically updated, and it affects customization consultants, now that you can’t install EXEs or DLLs into the ERP or CRM system itself. So what do we do, since there are still vital business needs that must be addressed?

First off, we as a vendor have to be much more disciplined in how we change our applications from version to version. Long gone are the days when we could just make any change we wanted and throw it over the fence to the Business Partners to figure out how to accommodate it in the whole eco-system. That approach made upgrading quite expensive and detracted from the overall customer experience, because upgrades generally introduced all sorts of problems that then needed to be stamped out. As we begin to automatically update people, both in the cloud and on-premise, we have to ensure the whole process is automatic and seamless. We have to:

  • Maintain application API compatibility so customizations and ISVs continue to work.
  • Limit schema changes so that we don’t break customizations or ISVs. Mostly just add fields or tables, and even if fields are no longer really used, leave them in for compatibility.
  • Provide tools to automatically convert anything beyond the database that is affected like Crystal Reports or VBA macros, so no additional work is required.
  • Ensure that we can upgrade independent of database state, i.e. don’t require that all batches be posted or such things.
  • Work closely with ISVs and Business Partners through alpha, beta and early adopter programs to ensure the entire ecosystem won’t be disrupted by new features and versions.
  • More tightly control the customizations that are allowed in the cloud. Even for custom reports and VBA macros, have rules and monitoring procedures so that badly behaved ones can be found and removed (or fixed).

When we are running our application in the cloud, how do we perform common integration-type customizations? A great many of our clients create programs to feed data into and out of an ERP to integrate with things like a custom subscription billing service, a company shopping web site or any number of other systems. Since you can’t install EXEs or DLLs, and even if you could do this via a VBA macro, chances are external communications would be blocked by a firewall, what do you do? For these cases you have to switch to using Web Services, which in the Sage world means SData. With SData you will be able to create Cloud to On-Premise integrations or Cloud to Cloud integrations. Basically you are removing the requirement that the two applications being integrated are on the same LAN.
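As a rough illustration of the idea, a script can call an SData feed over plain HTTP rather than talking to components on the same LAN. The sketch below is a minimal VBA example using the standard MSXML2.XMLHTTP object; the server URL, dataset path and credentials are made-up placeholders, and a real SData integration would also need to handle the authentication scheme, paging and error payloads.

```vba
Sub QuerySDataFeed()
    ' Hypothetical SData endpoint; replace host, application and dataset
    ' with real values from your installation.
    Dim url As String
    url = "http://myserver:5493/sdata/myApp/myContract/myDataset/salesOrders"

    Dim http As Object
    Set http = CreateObject("MSXML2.XMLHTTP")

    ' SData requests are ordinary HTTP(S) GETs that return Atom XML feeds.
    http.Open "GET", url, False, "username", "password"
    http.send

    If http.Status = 200 Then
        ' responseXML is an Atom feed; each <entry> element is one resource.
        Dim entries As Object
        Set entries = http.responseXML.getElementsByTagName("entry")
        MsgBox "Received " & entries.length & " sales orders"
    Else
        MsgBox "Request failed with HTTP status " & http.Status
    End If
End Sub
```

Because the call is just HTTP, the same macro works whether the ERP is on-premise or hosted, which is exactly the point of moving integrations to Web Services.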

Summary

Changing the thinking on customization will take time, but the industry and customer expectations are changing. Sage and its Business Partners need to be more disciplined in how we all customize to ensure we don’t create longer term problems for customers. In the Cloud world these problems show up much more quickly than in the on-premise world. We have to re-evaluate the technologies we are using and re-evaluate how our common customization design patterns are changing.

I tend to think that once we get through some transitional pain, we will come out the other side with a more powerful and more sustainable customization model that leads to a better customer experience.

Written by smist08

September 29, 2012 at 6:20 pm

Devices and the Cloud

leave a comment »

Introduction

We are currently seeing a proliferation of device releases: a new line of tablets from Amazon, new tablets and phones from Samsung, the iPhone 5 from Apple, and a certain amount of anticipation for all the Windows 8 based devices that should be coming next month.

Amazon is quickly becoming a major player in the tablet market; they have branched out from just delivering e-book readers to having a complete line of high performance tablets at very competitive prices. Amazon then makes most of its money selling books, music and movies to the owners of these tablets. These are Android based tablets that have been setup and configured with a fair bit of extra Amazon software to integrate seamlessly with the Amazon store.

In the Android world, Samsung continues to deliver exceptional phones and tablets of all shapes and sizes.

Apple has just shipped the iPhone 5 and we would expect new iPads sometime early next year.

Meanwhile Microsoft has released to manufacturing Windows 8 and devices based on this should be appearing on or after October 26.

With each generation of these devices we are getting faster processors, more memory, higher resolution displays, better sound quality, graphics co-processors, faster communications speeds and a plethora of sensors.

With so many players and with the stakes so high (probably in the trillions of dollars), competition is incredibly intense. Companies are competing in good ways, producing incredible products at very low prices and driving a rapid pace of innovation. They are also competing in rather negative ways: attack campaigns, monopolistic practices, government lobbying and high levels of patent litigation. The good news is that the negative practices don’t seem to be blunting the extreme innovation we are seeing. We are getting used to being truly amazed by each new product announcement from all these vendors, and expectations are set very high at each announcement and launch event. There is collateral damage, with once powerful companies like RIM or Nokia making a misstep and being left in the dust. But overall the industry is growing at an incredible pace.

Amazing new applications are being developed and released at a frenetic pace for all these devices. They know where you are, what you are doing and offer advice, tips or provide other information. There are now thousands of ways to communicate and share information. They all have deep integrations with all the main social media services. They communicate with natural language and high levels of intelligence.

The Data Cloud

Many of these devices are extremely powerful computers in their own right. The levels of miniaturization we’ve achieved are truly astounding. These devices are all very different, but what they share is that they are all connected to the Internet.

We now access all our data and the Internet from many devices, depending on what we are doing. We may have a powerful server that we do processing on, a powerful desktop computer with large-screen monitors, a laptop that we use at work or at home, a tablet that we use when travelling or at the coffee shop, a smart phone that we rarely use to actually phone people, perhaps a smart TV or an MP3 player, and so on. The upshot is that we no longer exclusively use one computing device. We now typically own and regularly use several; it seems most people have a work computer, a home computer, a tablet and at least one smart phone. People want to be able to do various work related tasks from any of these. So how do we do this? How do we work on a document at work, then later get a thought and want to quickly update it from our phone? Most of these devices aren’t on our corporate LAN; typically we are connecting to the Internet via WiFi or a cell phone network. The answer is that we are no longer storing these documents on a single given computer. We are now storing them in centralized, secure cloud storage (like iCloud, GDrive, or DropBox). These clouds are a great way to access what we are working on from any network and any device. Then each device has an App that is optimized for that device to offer the best way possible to work on the document in the given context. Many of these services, like Google Apps, even let multiple people work on these documents at once, completely seamlessly.

Further, many of these devices can keep working on our documents when we are off-line, for instance working on something on an iPad while on an airplane. Then when the device is back on-line it can synchronize any local data with the master copy in the data cloud. So the master copy is in the cloud, but there are copies on many devices that are going on-line and off-line, and modern synchronization software keeps them all in sync and up to date. Google had an interesting ad for its Chromebooks where they keep getting destroyed, but the person in the ad just keeps getting handed a new one, logs in, and continues working from where he left off.

The Grid

What we are ending up with is a powerful grid of computing devices accessing and manipulating our data. We have powerful servers in data centers (located anywhere) doing complex analytics and business processes, and powerful laptops and tablets handling data input, editing and manipulation. Then we have small connected devices like phones that are great for quick inquiries or for making smaller changes. Our complete system as a whole consists of dozens of powerful computing devices all acting on a central data cloud to run our businesses.

In olden days we had mainframe computing, where people connected to a mainframe computer through dumb terminals and all processing was done by the mainframe, which was guarded and maintained by the IS department. Then the PC came along and disrupted that model: we had separate PCs doing their own thing, independent from the central mainframe and from the IS department. Eventually this anarchy got corralled and brought under control with networks and things like domain policies. Then we took a turn back towards the mainframe days with Web based SaaS applications. Rather than running in the corporate data center, these run in the software vendor’s datacenter, and the dumb terminal is replaced by the Web Browser. This re-centralized computing again. Now that model is being disrupted in turn by mobile devices, where the computing power is back in the hands of the device owners, who once again control what they are doing.

The difference now from the PC revolution is that everything is connected and out of this highly connected vast grid of very powerful computing devices we are going to see all new applications and all new ways of doing things. It’s interesting how quickly these disruptive waves are happening.

Summary

It’s really amazing what we are now taking for granted: things like voice recognition, where you can just ask your phone a question and actually get back the correct answer, or the ability to get a street-view look at any location in the world. It’s an exciting time in the technology segment with no end in sight.

In this article I was referring to documents, which most people would associate with spreadsheets or word processing documents. But everything talked about here applies equally to all the documents in an ERP or CRM system, such as Orders, Invoices, Receipts and Shipments. We will start to see the same sort of distributed collaborative systems making it over to this space as well.

Written by smist08

September 22, 2012 at 8:07 pm

Reporting Via Macros

with 73 comments

Introduction

With our Sage 300 ERP 2012 release we updated our Crystal Reports runtime to the newest Crystal 2011 runtime (SP3 actually). The intent is to move to a fully supported version of Crystal Reports, so as they adapt to things like Windows 8 and Windows 2012 Server, we know we are fully supported and can get updates for any problems that show up. Plus it means that people customizing reports can take advantage of any of the new features there.

For reports you can print to preview, print to file or print directly to a printer. Then we have various options for printing from various web contexts like Quotes to Orders. You can drive reports from our regular forms, or you can write VBA macros that automate the reporting process.

This blog post is really for people who print reports programmatically and so are more affected by the changes in the Crystal runtime, and more specifically by changes in the Crystal Reports API.

Headache for Customizers

Our intent was that people performing customizations would use our API to drive Crystal Reports. Then your programs are upgrade safe, since we maintain compatibility of our COM API. However, it turns out that quite a few people have automated the reporting process by writing to the Crystal COM API directly.

This then leads to a problem, because Crystal dropped support for their COM API. Not only did they drop support for it, they removed it completely from the product. Hence anyone writing directly to the Crystal COM API will be broken by the Sage 300 ERP 2012 release, at least for new installs. If you had an older version and don’t un-install it, then you can still use the older version of the Crystal runtime (since it will still be there), but that isn’t a good long term solution as people upgrade computers and move to newer operating systems like Windows 8.

Crystal Reports now supports only a .Net interface and a Java interface. For this version we had to change our internal interface to Crystal from COM to .Net. (The newer Web portal parts use the Java interface and so were fine.)

Printing without User Intervention

It appears that one of the common reasons to go to the Crystal API directly is to print to file without any manual intervention. Often if you choose File as a print destination then we prompt you for the format and then prompt you for the file name to save to. People want to set these programmatically. Our API does have the ability to do this in a couple of situations.

Below is a macro I recorded to print O/E Quotes. I deleted any extra code, like the error handler, to make it a bit more compact. Then I edited the destination and format to change the print destination to PD_FILE and the format to PF_PDF.

Sub MainSub()
    Dim temp As Boolean
    Dim rpt As AccpacCOMAPI.AccpacReport
    Set rpt = ReportSelect("OEQUOT01[OEQUOT01.RPT]", "      ", "      ")
    Dim rptPrintSetup As AccpacCOMAPI.AccpacPrintSetup
    Set rptPrintSetup = GetPrintSetup("      ", "      ")
    rptPrintSetup.DeviceName = "HP LaserJet P3010 Series UPD PS"
    rptPrintSetup.OutputName = "WSD-ad0e8bc6-396c-4e50-84c7-fab17beaf18a.006a"
    rptPrintSetup.Orientation = 1
    rptPrintSetup.PaperSize = 1
    rptPrintSetup.PaperSource = 15
    rpt.PrinterSetup rptPrintSetup
    rpt.SetParam "PRINTED", "0"              ' Report parameter: 4
    rpt.SetParam "QTYDEC", "0"               ' Report parameter: 5
    rpt.SetParam "SORTFROM", " "             ' Report parameter: 2
    rpt.SetParam "SORTTO", "ZZZZZZZZZZZZZZZZZZZZZZ"   ' Report parameter: 3
    rpt.SetParam "SWDELMETHOD", "3"          ' Report parameter: 6
    rpt.SetParam "PRINTKIT", "0"             ' Report parameter: 7
    rpt.SetParam "PRINTBOM", "0"             ' Report parameter: 8
    ' The selection formula must be passed as a single string,
    ' so use VBA line continuations to keep it readable.
    rpt.SetParam "@SELECTION_CRITERIA", _
        "(({OEORDH.ORDNUMBER} >= """") AND " & _
        "({OEORDH.ORDNUMBER} <= ""ZZZZZZZZZZZZZZZZZZZZZZ"")) AND " & _
        "(({OEORDH.COMPLETE} = 1) OR ({OEORDH.COMPLETE} = 2)) AND " & _
        "({OEORDH.TYPE} = 4) AND " & _
        "(({OEORDH.PRINTSTAT} = 1) OR ({OEORDH.PRINTSTAT} = 0) OR " & _
        "({OEORDH.PRINTSTAT} = -1))"   ' Report parameter: 0
    rpt.NumOfCopies = 1
    rpt.Destination = PD_FILE
    rpt.Format = PF_PDF
    rpt.PrintDir = "c:\temp\quote.pdf"
    rpt.PrintReport
End Sub

 

Basically this technique will silently export to PDF files. The Format member also accepts PF_RTF, which will export silently in RTF format. The file exported to is specified in the PrintDir property: if this is a folder, the filename will be the same as the report name; if it’s a filename, that name will be used (make sure you get the extension right).

You can also export silently to HTML format by setting the Destination to PD_HTML. For HTML, if there is one file then PrintDir specifies the filename, but sometimes HTML requires multiple files, in which case PrintDir is used as a folder name.
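For example, switching the recorded macro above to HTML output is just a matter of changing the destination near the end. In this sketch PrintDir is assumed to be a folder, since the HTML export may emit several files:

```vba
' Variation on the earlier recorded macro: silent export to HTML.
' rpt is the AccpacCOMAPI.AccpacReport set up as shown above.
rpt.Destination = PD_HTML
rpt.PrintDir = "c:\temp\quotehtml"   ' treated as a folder if multiple files result
rpt.PrintReport
```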

These are a few ways you can print reports to a file silently without user intervention.

Secret Parameters

In the recorded macro above, you might notice the strange parameter “@SELECTION_CRITERIA”. If you look in the report, there is no such report parameter. Basically our API lets you set report parameters and then print the report; however, there are a few other things you might want to do with the Crystal Reports API. Below is a list of special parameters that might help you get a grip on some more aspects of it:

“@SELECTION_CRITERIA”: translated into PESetSelectionFormula(job, asParamValue). This parameter sets the selection criteria in the report.

“@SELECTION_CRITERIA_xxxx”: where xxxx is the name assigned to the subreport when it was first created. This call will be translated into

job = PEOpenSubreport( parentJob, xxxx)
PESetSelectionFormula (job, asParamValue)

To set the selection criteria in the designated subreport.

“@SELECTION_ADDCRITERIA”: adds the specified criteria to the selection criteria that already exist inside the report.

“@TABLENAME”: The parameter value must be in the form:

“table” “name”

Each instance of “table” will be switched to “name” before each PESetNthTableLocation call. This is done for both the main report and all subreports. You must put both the table and the name in quotes or the parameter will be rejected. Table names are not case sensitive.
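For instance, a macro could point a report at a different table before printing. This is a small hedged sketch: OEORDHARC is a made-up name for an alternate copy of the order header table, and rpt is assumed to be an AccpacReport set up as in the recorded macro above.

```vba
' Assuming rpt is an AccpacCOMAPI.AccpacReport obtained via ReportSelect.
' Both the original table and its replacement must be in quotes within
' the parameter value, or the parameter is rejected.
rpt.SetParam "@TABLENAME", """OEORDH"" ""OEORDHARC"""
rpt.PrintReport
```

The doubled quote characters are VBA escaping; the value the API actually receives is "OEORDH" "OEORDHARC", matching the “table” “name” form described above.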

“EMAILSENDTO”: This parameter sets the email address when you are using PD_EMAIL. It suppresses the popping up of the address book dialog.

“EMAILSUBJECT”: This parameter sets the email subject when you are using PD_EMAIL.

“EMAILTEXT”: This parameter sets the email body when you are using PD_EMAIL.

“EMAILPROFILE”: This parameter sets the email profile when you are using PD_EMAIL.

“EMAILPROFILEPWD”: This parameter sets the email password when you are using PD_EMAIL. This password is used to sign in to MAPI.
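Putting these together, a macro can email a report without any dialogs appearing. The sketch below is hedged: the recipient, subject and profile values are placeholders, and rpt is assumed to be an AccpacReport prepared as in the earlier recorded macro.

```vba
' Assuming rpt is an AccpacCOMAPI.AccpacReport obtained via ReportSelect
' and its regular report parameters have already been set.
rpt.SetParam "EMAILSENDTO", "customer@example.com"   ' suppresses the address book dialog
rpt.SetParam "EMAILSUBJECT", "Your Quote"
rpt.SetParam "EMAILTEXT", "Please find your quote attached."
rpt.SetParam "EMAILPROFILE", "Outlook"               ' MAPI profile name on this machine
rpt.Destination = PD_EMAIL
rpt.PrintReport
```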

The following are parameters that, if you create them in your report, the system will automatically set without you having to do any programming.

“CMPNAME”: The company name.

“ACCPACUSERID”: The Sage 300 User ID.

“ACCPACSESSIONDATE”: The session date from signon.

“ALIGNMENT”: The current alignment option.

“REGIONALFMT” : The current regional format.

Summary

We went through this Crystal API pain once before, when Crystal dropped their DLL interface (crpe32.dll) and we had to change all API access over to COM. We had similar issues then, due to various people using the DLL interface and having to re-code for the COM interface. It was at that point that we added some functionality to our API, namely the TABLENAME special parameter, so that a number of people could start using our API instead.

Hopefully most people can switch from using the Crystal COM API directly to using the Sage 300 ERP API; however, if something is missing, please comment on this blog so we can consider expanding our API in the future. I’ve already gotten a couple of requests to add silently exporting to Excel format in addition to RTF and PDF. Keep in mind that, just as we have gone from DLL to COM to .Net, chances are in a few versions the Crystal API will change again (perhaps to a REST web services API) and we will have to go through this once more.

Written by smist08

September 15, 2012 at 7:50 pm

Sage 300 ERP 2012 RTM

with 3 comments

Yes, Sage 300 ERP 2012 has been “Released to Manufacturing”. In a way this is really a “Release to Marketing”: since we don’t really manufacture much anymore, the release gets posted for download and then sales and marketing takes over. I think everyone prefers keeping the acronym RTM rather than changing to RTW for “Released to the Web”. I previously summarized all the great things in the release in my Sage 300 ERP 2012 posting.

It’s been a lot of hard work and a tumultuous journey since we released 6.0A at the end of 2010. But we are really happy with this release; it includes many useful new features as well as building a number of foundations for future development.

Now that we are RTM, business partners should be able to start downloading it on Sept. 5 and DVDs should be available by Sept. 18.

Rebranding

Sage Accpac ERP is now released under the new Sage branding and is now Sage 300 ERP 2012. This means we now match the revamped Sage web site and fit in nicely with all the new sales and marketing material. Hopefully we can now fully leverage and build on the Sage brand so that people are already familiar with our products.

In addition our editions are changing. It would be confusing to have Sage 300 ERP 200 Edition 2012. So 100, 200 and 500 editions become Standard, Advanced and Premium Editions. Hence something like Sage 300 Advanced ERP 2012.

Manufacturing

When I started with Computer Associates on the original CA-Accpac/2000 project, manufacturing was a much bigger deal than it is today. In those days we produced a boxed product that consisted of floppy disks, printed and bound manuals, many special offer cards and the copy protection dongle, all shrink wrapped in plastic.

Back in the 90s we had quite a complicated schedule of when everything had to be submitted so that it could all come together on our release date. For instance manuals took 1 month to get printed, and disks took 1 week to get duplicated and labeled (if we were lucky). So the technical writers had to be finished a month ahead of the programmers. Similarly any included marketing material, as well as the design for the box had to all be submitted quite early.

Back then we released on 3½ inch 720K floppy disks (which were in hard plastic cases by this point). Each module took 6 or 7 disks, so you had a stack of disks for System Manager, a stack for General Ledger and so on. A single 720K floppy was generally quite a bit more expensive than a blank DVD is today. (In fact the first version of Accpac was released on 8” floppies for the North Star CP/M computer, but that was before my time.)

After we shipped the gold master floppy disks off to manufacturing, we still had one week to QA while they were being duplicated. We would continue regression testing through the week looking for any more serious issues. If something was found, it was quite expensive, since usually any manufactured floppies were thrown away and new ones were duplicated.

For a while we produced 5 ¼” floppy disks which were available by demand. With version 3.0A we switched entirely to CDs, but we still shipped one module per CD. With CDs it then became practical to provide things like PDF versions of manuals on the CD along with other extras that were impractical on floppy disks.

One thing about having all the modules on separate CDs was that we could stagger the release: we would release SM and the financial modules first, then the operations modules a few months later, the Payroll modules a few months later still, and various other things even later. The end result was that when we first announced RTM on a version, it would be nearly a year before all the modules, options, integrations, translations, etc. were fully released.

Now there is only one RTM for a version and this RTM includes everything on one download image (or one or two DVDs). This includes all ERP modules, all documentation, all options products, translations in five languages and all integrations (like CRM and HRMS). So now when we RTM, a customer knows that all Sage components they need are ready and they can go ahead and start the upgrade process. We also work with all our ISVs to try to get their products certified for the new version as quickly as possible.

These days everything is on-line, so the web site needs to be ready to link to the new release, and then we provide the download images that are posted there. We still produce a gold master DVD, since people can order these if they want them (for a fee).

Release Cycles

Although not visible outside of development, we also run our release cycles quite differently now than we used to. In the early versions all the coding was done first; then, when we decided it was code complete, we threw it over the wall to QA and went through a long find-and-fix-bugs phase. Generally we shared QA with Sage 50 Canadian (then known as Simply Accounting), and one product was QA’ed while the other was coded.

Now we use an Agile development process: QA is involved from the start and there are no separate development and QA phases. Nothing is considered code complete or done until it is fully QA’ed and documented. Generally this leads to more accurate schedules and higher quality products.

Summary

We are very excited to be releasing Sage 300 ERP 2012. We hope that people upgrade to it and enjoy using it. We are also eager to start work on the next version, which is looking very promising.

 

Written by smist08

September 1, 2012 at 5:04 pm

Export

with 9 comments

Introduction

I’ve recently returned from our Sage Summit 2012 conference, where we showcase all the wonderful new features and technologies we are just releasing or have under development. However, in giving various sessions, it was clear that besides evangelizing all the new work going on, quite a few people aren’t aware of the functionality that has been in the product all along. So as we develop many new reporting and inquiry tools, I thought it was a good idea to go back and talk about our good old import/export technology. In this article I’ll concentrate on how to get data out of Sage 300 ERP using its built-in technology.

You can run Export from pretty well any data entry screen in Sage 300 ERP: just open the File menu and choose “Export…”, which brings up the Export screen.

Note that there is also a File – Export… menu on every Finder in the system.

Formats

The first question you need to answer is what format to export to; Export supports a variety of formats.

Generally you are exporting the data because you know where you want it, usually in another program, so you can choose the format that works best for you. The two most popular formats are “Excel” and “Single File CSV”. Many people know Excel and like to use this as their format. Excel is a great tool for doing data analysis on ERP data with easy ways to generate graphs, pivot tables, etc.

The reason people use “Single File CSV” is that it is easy to deal with from scripting tools and other programs. These are just text files where the values are separated by commas (hence the name). Generally it’s easy to programmatically open and parse these files. Another benefit of CSV is that it’s such a simple format that it exports extremely quickly, which for large files can be a real benefit. Also, Excel has no problem reading CSV files.

If you are writing your own programs to process these files, consider XML, as most languages these days have excellent libraries for parsing and processing XML files.
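As a rough sketch of processing an exported file programmatically, the VBA fragment below loads an exported XML file with the standard MSXML2 DOM and counts the records in it. The file path and the record element name (ARCUS, the A/R customer table) are assumptions for illustration; inspect an actual export to see the element names it uses.

```vba
Sub CountExportedRecords()
    ' Load an XML file previously produced by Export.
    Dim doc As Object
    Set doc = CreateObject("MSXML2.DOMDocument")
    doc.async = False

    If doc.Load("c:\temp\customers.xml") Then
        ' Count record elements; the element name depends on the exported table.
        Dim recs As Object
        Set recs = doc.getElementsByTagName("ARCUS")
        MsgBox "Exported " & recs.length & " customer records"
    Else
        MsgBox "Could not parse file: " & doc.parseError.reason
    End If
End Sub
```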

Another advantage of CSV and XML files is that they have no limits on the number of columns and rows, whereas Excel does (the exact limits vary by version of Excel). You just may or may not be able to load a very large CSV or XML file into Excel.

Select Fields/Tables

After you’ve selected the format and the filename to export to, next you check and uncheck boxes to select the fields that you want. This includes the fields in the main table along with any detail tables. Many documents in Sage 300 consist of a header record with a number of detail tables; for instance each Order header record has multiple detail records for all the items that make up the order.

A lot of times people just stick to the default of all fields selected, since it doesn’t make a lot of difference whether the extra fields are there or not.

Note that you can right click on the table name to select or un-select all the fields in that table (a bit counter-intuitive since clicking on the table check box doesn’t do anything). You can also rename the table if you want it to appear in the export file with a different name.

Set Criteria

Suppose you export all your customers to Excel, but most of the spreadsheet is taken up by inactive customers that you don’t care about. You could filter these in Excel, or you could filter them in Export so they aren’t there getting in the way to start with. Export has a handy “Set Criteria…” dialog where you can define a filter to select the records being exported.

This dialog is an example of “Query by Example” (QBE). You basically specify the field names in the title area of the grid. Then it “and”s items going horizontally and “or”s items vertically. So in the dialog above, if I added another item to the right, then both would need to be true for the record to be exported. If I added another value below, then the record would be exported if the value was either of these. Using this table you can build up fairly sophisticated criteria. If you think better in SQL where clauses, you can hit the “Show Filter” button to see the where clause that you are building.
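To make the and/or behavior concrete, here is a sketch (not the actual Export implementation) of how such a criteria grid could be turned into a where clause, AND-ing the cells across each row and OR-ing the rows together. The field names are just illustrative:

```javascript
// Sketch of Query by Example evaluation: the grid is an array of rows,
// each row an array of {field, value} cells. Cells across a row AND
// together; rows OR together. Not the actual Export implementation.
function buildWhereClause(grid) {
  const rowClauses = grid.map(row =>
    "(" + row.map(cell => `${cell.field} = '${cell.value}'`).join(" AND ") + ")"
  );
  return rowClauses.join(" OR ");
}

// Example: customers that are either (active AND in territory 1) or on hold.
const where = buildWhereClause([
  [{ field: "SWACTV", value: "1" }, { field: "CODETERR", value: "1" }],
  [{ field: "SWHOLD", value: "1" }]
]);
console.log(where);
// (SWACTV = '1' AND CODETERR = '1') OR (SWHOLD = '1')
```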

Load/Save Script

Selecting all those fields and setting the criteria can be a bit of work. Especially if you need to run the same export every day. The solution is to use the “Save Script…” button to save what you’ve done, and then when you return you can use “Load Script…” to get it back. This is the first step in automating the export process.

Exporting From Macros

If you want to automate things further you can drive Export from a macro. You can do this with no user intervention whatsoever and have these run on a regular basis. Basically you set up all the options you want and save them in a script file. Then from the macro you can execute this script file. Besides the code for this macro, you need to add two references: “Accpac Import/Export 1.0” and “Microsoft XML, v4.0”. The return value from the export operation is an XML document that contains all the details of what happened. I included a bit of code in the macro to parse that XML to display the number of records exported.

Sub ExportCustomers()
    ' Declarations (some added so the fragment compiles as a complete Sub).
    Dim xmlstr As String, msg As String
    Dim pbstr As String
    Dim doc As New MSXML2.DOMDocument40
    Dim ie As ImportExport
    Dim breturn As Boolean
    Dim mDBLinkCmpRW As AccpacCOMAPI.AccpacDBLink
    Dim pStatus As AccpacCOMAPI.tagEventStatus

    On Error GoTo ACCPACErrorHandler ' Set error handler

    Set mDBLinkCmpRW = OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READWRITE)
    Set ie = New ImportExport
    If (Not (ie Is Nothing)) Then
        With ie
            ' Open the import/export engine.
            .Open mDBLinkCmpRW
            ' Run the export script previously saved from the Export screen.
            .ExecuteExportScript "c:\temp\custscript.XML", breturn
            ' The result comes back as an XML document; parse out the
            ' "Exported" count for the A/R Customers view (AR0024).
            .GetExecuteResult pbstr
            doc.loadXML (pbstr)
            doc.setProperty "SelectionLanguage", "XPath"
            xmlstr = "VIEW[@ID='AR0024']"
            msg = doc.documentElement.selectSingleNode(xmlstr).Attributes.getNamedItem("Exported").nodeValue
            MsgBox msg & " record(s) Exported"
            .Close
        End With  ' ie
    End If

    ' Cleanup
    Set mDBLinkCmpRW = Nothing
    Exit Sub

ACCPACErrorHandler:
    Dim lCount As Long
    Dim lIndex As Long
    If Errors Is Nothing Then
        MsgBox Err.Description
    Else
        lCount = Errors.Count
        If lCount = 0 Then
            MsgBox Err.Description
        Else
            For lIndex = 0 To lCount - 1
                MsgBox Errors.Item(lIndex)
            Next
            Errors.Clear
        End If
        Resume Next
    End If
End Sub

 

Export from Crystal

Crystal Reports also supports export in quite a few formats. Generally you are choosing whether you want to export the formatting, the data, or a bit of both. If you want good formatting, you would generally export in PDF format (or perhaps one of the Word formats). If you want only the data, export using one of the Excel data-only formats (the ones that don’t say “Data-Only” tend to apply formatting in Excel, which sometimes makes the worksheet hard to work with).

Summary

The built-in Export functionality in Sage 300 ERP has been there since version 1.0A, but often gets forgotten amid many of the new features. It’s a fairly powerful tool and can solve quite a few data sharing and analysis problems.

Written by smist08

August 25, 2012 at 6:17 pm

Sage Summit 2012

with 7 comments

Introduction

Sage Summit is Sage’s North American conference, which was held this year in Nashville, Tennessee at the Gaylord Opryland Resort and Convention Center. Pascal Houillon, the CEO of Sage North America, gave the opening keynote along with Himanshu Palsule, CTO. They outlined how the world is changing with the proliferation of mobile devices and how these are changing our lives. During the keynote, a video was played that showed how all the various mobile connected services Sage is developing could affect a businessperson in their daily life. The conference ran from August 12 to 17, with the first half being for partners and the customers joining on Tuesday. There were many announcements, demos, town halls, tutorials, labs and presentations. This blog posting looks at a few of the items that I felt were most significant (at least to me).

Sage City

The customer half of the conference kicked off with a new idea called “Sage City”. This was held in a giant meeting room with a circular stage at the center surrounded by seating for all attendees. Then around this were a number of “villages” that were dedicated to individual industry areas like manufacturing, distribution and accounting. After an initial keynote and introduction, everyone moved to a “village” of their choosing and within the village joined a focus group of 7 or 8 people to discuss common problems and to share and brainstorm solutions to these. Then all these topics and outcomes were written up and posted around the outside of the room. There were two sessions of this separated by a break where drinks and food were brought in. This was a very interesting and innovative networking session and hopefully many good ideas resulted.

Mobile Connected Services

One of the major announcements at the conference was the progress being made on Sage’s mobile connected services initiative. Several shipping mobile connected services were demonstrated along with a number that are currently in development. Below are a couple of screen shots from the Sales Management application that was shown during both the partner and customer keynote speeches. This is a native iPad application that communicates back to a cloud service using SData. Note that the applications I have screen shots of here are in the “experience testing” stage, where they are getting a large amount of customer feedback, including at the UCD lab at the conference. After all this testing, a more final form of the product will be specified, so expect these to look quite different when they ship, since they will incorporate all this feedback.

There was also a Service Billing application demonstrated that is written using the Argos SDK and runs on all smart phones. This application was demonstrated integrated with Sage 50, 100 and 300 ERPs, showing how the service charges entered in the mobile application make it back into the ERP as A/R transactions.

Below is a picture of the high level architecture that is being used to develop these applications:

I’ll be going into a lot more detail on what this all means and how it is put together in future blog posts, to explain how we will integrate to on-premise products, how we develop these applications in the Azure cloud and how ISVs can integrate into this platform.

Sage 300 ERP 2012 Release

A very highly featured product at the show was the forthcoming Sage 300 ERP 2012 release. This is coming in September, so there were many sessions highlighting all the new features it contains and several demo stations in the trade show where people could have a look at it. Beta 2 is currently shipping, which gives a pretty good look at what the product will be. I’ve also blogged quite a bit on everything going into the release, which is summarized here. Below shows the Sage 300 ERP Desktop driven by the Purchase Order Visual Process Flow rather than the usual tree of icons.

Summary

This was a very quick overview of some of the goings on at Sage Summit. I’m sure these themes along with others will be intertwined in all my blog postings over the coming year. Looking forward to the next Sage Summit 2013 at the Gaylord National Resort and Conference Center in Washington DC.

Developing Windows 8 Style UIs

with 5 comments

Introduction

Microsoft has released Windows 8 to manufacturing with a whole new user interface technology. Up until a few days ago this was called Metro, but now Microsoft has dropped that name in favor of “Windows 8 Style UIs”. A bit of a strange name, but full product names rarely just roll off the tongue.

I’ve spent a little time playing with developing “Windows 8 Style UIs” and thought I’d spend this blog post covering some of my experiences. Let’s just call them W8SUs for the rest of this post.

Closed Development System

One of the main goals for this new UI development system is to copy Apple’s success with iOS development and the iTunes store. In the Apple world, you can only develop native iPad and iPhone apps using the Apple SDK on a Mac computer. Further you can only distribute your applications by posting them on the Apple iTunes App store, passing a certification process and in the process allowing Apple to take 30% of the revenue. This has been making Apple billions of dollars and Microsoft would like to emulate that.

You can only develop W8SUs in Visual Studio. VS 2012 generates a bunch of proprietary cryptographic code-signing info that must be present for the application to run. Further, you must be signed on with a Windows Live developer account. Another gotcha is that you can only develop for these on Windows 8 (or Windows Server 2012). If you install Visual Studio 2012 on a Windows 7 computer, it won’t install any Windows 8 development components there.

Once you do all this, you can’t just compile your application, zip it up and give it to a friend to run. Like Apple, W8SUs can only be installed via the Microsoft Store. There is an enterprise distribution system to install apps developed for an enterprise across an enterprise, but this again is tightly controlled. Even if you install on another computer via your developer license, it will be time bombed to only work for 1 month.

This is all very new to Windows developers. I’m not entirely sure how it will be received. Apple is successful because of all the revenue their store generates. However most of these are low cost consumer applications. Not sure how this will play out in the enterprise market.

Visual Studio 2012

You can develop these UIs in either JavaScript/HTML or C#/XAML. I chose JavaScript/HTML since that is what I already know. You can use either VS 2010 or 2012; I figured I may as well go with the newest, even though it’s a release preview. Actually, VS 2012 worked pretty well. Debugging these applications is fairly easy and the tools are pretty good. Since JavaScript is object oriented more by convention than as an enforced part of the language, IntelliSense has to guess what is valid, and although not always correct, it still does a pretty good job. The only place I found it difficult was when you get an exception as part of an event; then it can be pretty tricky to find the true culprit, since it usually isn’t part of the call stack.

VS 2012 comes with a set of templates to give you a start on your W8SUs. These templates give you a working program with some faked-in data. When developing a W8SU in JavaScript/HTML, you need to interact with a number of core operating system components, which are either built into the environment by some automatically included references or provided via some proprietary UI controls. For instance, the scrolling ListView that is the hallmark of the opening Start Page is a proprietary control that includes all the standard Win8 interactions. When you are programming in JS, the core of the program consists of handling some proprietary events for the program execution state and calling the API to invoke the data binding functions. Once you get away from this, you can program individual pages of your application pretty much as standard web apps, using standard web libraries like JQuery or HighChart. Then you string together the page navigation using some proprietary APIs.

So you are doing full real web development with JavaScript/JQuery/HTML/CSS, but you are producing an application that will only run on Windows 8 installed from the Microsoft store. A bit of a strange concept for Web Developers, where the promise was to write once and run anywhere. I think you can structure your program to keep most of it re-usable to generate a Web app version using a separate page navigation system and some sort of alternative to the ListView control.

JavaScript Restrictions

When running under W8SU, you are essentially running under a modified version of IE 10. However there are a number of annoying restrictions compared to running IE 10 regularly. In previous versions of IE, many standard web functions, like parsing XML, were handled with ActiveX controls. Now IE can do many of these things in the standard web way, so it’s better not to use the ActiveX way. So if you try to use an older library that detects you are running under IE and tries to use one of these, you get a very severe security exception. In general you can’t use any add-ons or ActiveX controls, including those that used to be built into IE and Windows. I found that a workaround is to fool libraries into thinking they are running under Firefox rather than IE, which often gets around the problem.

Plus, W8SU removes some features of standard JavaScript that it considers “dangerous” for some reason. For instance, you can’t use the JavaScript alert and prompt statements; these are banned. This is annoying because, again, many libraries use these for unexpected errors, so instead of seeing the error you get a horrible security exception.

Another annoying thing is that the screen doesn’t behave entirely like a standard web page. The page will not scroll, so if your content goes off the side, it is just truncated; scroll bars are never added to the whole page. If you want scrolling then you need to put your content in a ListView or some other control, which then causes other complexities and problems. I’m hoping this is really a bug that gets corrected by the real release.

Some of the controls also seem a bit buggy, which hopefully will be corrected by release. For instance if you put a ListView inside a ListView control, it gets quite confused. Also if you put a proprietary date picker in a ListView control then it ends up read-only.

Since these are based on IE, they use IE’s caching mechanisms. Currently there is no way to clear these caches from the system. The only way is to know the secret folders it uses and to go in and manually delete these. If you clear the cache in IE 10, it has no effect on W8SU programs. This is mostly annoying when doing application development, since re-running the program won’t re-download new static content from your web site. Again hopefully this is fixed by release.

SData

Using SData from a W8SU is really quite easy. There is an API called “WinJS.xhr” which makes asynchronous RESTful web service calls.

    Promise = WinJS.xhr({
       type: "POST",
       url: sdataurl,
       user: "USERID",
       password: "PASSWORD",
       data: atomdata
    });

It has the exact parameters you need for making SData calls. It returns a promise which is W8SU’s way of notifying you when an asynchronous request returns, basically you can set functions to be called if the call succeeds, fails or to get progress. You can also join promises together, so you can make a bunch of asynchronous calls and then wait for them all to finish.
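WinJS promises predate standard JavaScript promises but behave in the same spirit. As an illustrative sketch using standard promises rather than the WinJS API itself (the fake SData call below just stands in for WinJS.xhr, and the URLs are hypothetical placeholders):

```javascript
// Sketch using standard JavaScript promises (not the WinJS API itself).
// fakeSDataCall stands in for WinJS.xhr; it resolves with a fake response.
function fakeSDataCall(url) {
  return Promise.resolve({ status: 200, responseText: "<feed from " + url + ">" });
}

// Attach success and failure handlers, as you would on a WinJS promise.
fakeSDataCall("http://servername/sdata/customers").then(
  function (result) { console.log("succeeded with status", result.status); },
  function (error) { console.log("failed:", error); }
);

// Join several asynchronous calls and wait for them all to finish.
// WinJS offers WinJS.Promise.join for this; standard JS uses Promise.all.
Promise.all([
  fakeSDataCall("http://servername/sdata/customers"),
  fakeSDataCall("http://servername/sdata/orders")
]).then(function (results) {
  console.log(results.length, "calls completed");
});
```

A real SData URL points at a specific application, contract and resource on your server, and the response body is an Atom feed rather than the stub string used here.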

Summary

I think Window’s 8 Style UIs have a lot of potential. I worry they are being rushed to market too quickly and aren’t quite ready for prime time. I also worry that the touch focus is going to turn everyone with a standard laptop or desktop off Windows 8 entirely. Hopefully the technology gets a chance to evolve and that new devices beyond the Surface tablet hit the scene to give it a really good user experience.

Developing for Mobile Devices

with one comment

Introduction

Having just posted a couple of articles on the Argos Mobile SDK here and here; and with the news that Windows 8 has just been released to manufacturing; I thought it might be a good time to reflect a bit on mobile devices and how to develop for them.

By mobile devices we tend to mean smart phones and tablets and not laptop or desktop computers. Certainly there is a lot of blurring around the edges of these categories. Phones becoming as big as tablets, laptops with detachable keyboards and touch screens, etc. There are all sorts of charts showing the growth of mobile devices such as this one from Business Insider:

Today you see iPads, iPhones, and Android devices everywhere. There is a huge market already and from these growth trends you can only see the market demand accelerating in the future. We are already at the point where many workers perform all their computing tasks through a tablet or a phone and may not have access to a desktop or laptop computer at all.

How to develop for mobile devices is a very hot topic around the web. There is a lot of debate around whether to develop using the native SDK’s from the device manufacturers, using a third party toolset that targets multiple devices or writing a web application that runs on anything. What are the pros and cons of all these approaches? What are the tradeoffs you are making when deciding between these?

Device Experience

Apple has done a great job creating the iPhone and iPad and giving them a great user experience. Anyone writing apps that run on these devices wants to make their apps as great as any app from Apple. The same goes for creating Android apps or Windows 8 Metro apps. So what are some of the things that you want in your application to fit naturally into these environments?

  • Follow the look and feel guidelines for the platform. Look and behave like any of the applications that the manufacturer provides. Honor all the touch gestures on the device, have great professional graphics and layout at the right resolution.
  • Integrate with all the built-in hardware on the device. For instance, being able to access contacts, dial the phone, use the GPS, read the accelerometer, take photos, record sound, receive input via voice, film movies or use any other neat hardware feature, wherever it makes sense.
  • Integrate with the native operating system and utilize all its features. For instance on Windows 8 Metro, support the charms, command bar and integrated application search.
  • Have great performance. Feel just as snappy as any other app on the platform. Don’t hog bandwidth; remember bandwidth can cost money.

Device Differentiators

What distinguishes all these devices? What makes them different? What do you need to support for the best experience?

  • Screen size, we have all sorts of screen sizes from small but high resolution (iPhones with retina displays) to large with low resolution (cheap large screen desktop monitors). Being adaptable is quite a challenge.
  • Input methods. Devices support touch, voice, keyboard, mouse, pen, QR codes, NFC, etc. Supporting all these can be quite a challenge.
  • Different hardware devices. How multi-touch is the device, does it have GPS, does it have a thermometer, is there a sim card, etc.
  • Operating system version. How up to date are most people? Which version do you want to support?
  • Different processor power, battery life and memory.

Using a Web App as a Device App

With all these considerations, is it possible to have a web application pose as a native app? That is what we have been doing with the Argos SDK. The nice thing about web applications is that they run pretty much anywhere; all the mobile devices have really good browser support. Web applications are also good at adapting to different screen sizes; HTML has always been doing this. The device manufacturers have been good about adding input events to the JavaScript programming model, so you do get notified of touch gestures in your JavaScript application.

However device support is limited. It slowly makes its way into JavaScript, but generally web apps tend to trail native applications in hardware and operating system support. Another key problem is that you can’t post a URL to the app stores like iTunes, these only take native applications.

Enter systems like PhoneGap. These take a web app and wrap it in a native application. In addition to creating a native wrapped app that you can post to the app store, it also adds a lot of hardware abstraction, so you can access things like the accelerometer, but in a way that PhoneGap will make work on all devices.
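For instance, PhoneGap exposes the accelerometer through a single JavaScript call that works the same way across devices. A sketch (the handler functions are plain JavaScript so the sketch is self-contained; the commented-out line is the PhoneGap/Cordova API call, which only works on a device after the 'deviceready' event fires):

```javascript
// Success handler: PhoneGap passes an acceleration object with
// x, y, z components (in m/s^2) plus a timestamp.
function onAcceleration(acc) {
  return "x=" + acc.x + " y=" + acc.y + " z=" + acc.z;
}

// Error handler, called if the hardware is unavailable.
function onError() {
  return "accelerometer unavailable";
}

// In a real PhoneGap app (after the 'deviceready' event):
// navigator.accelerometer.getCurrentAcceleration(
//   function (acc) { console.log(onAcceleration(acc)); },
//   function () { console.log(onError()); }
// );

// Simulated reading for illustration: a device lying flat.
console.log(onAcceleration({ x: 0, y: 0, z: 9.81, timestamp: 0 }));
```

The same call works unchanged on iOS, Android and the other platforms PhoneGap supports, which is the hardware abstraction being described above.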

The Argos SDK is fully PhoneGap compatible, so you can create mobile applications with Argos, compile them with PhoneGap and then deploy them through an app store.

Windows 8

I know Microsoft just dropped the use of the “Metro” name for its native tablet apps, but without a replacement I’m going to just keep calling it Metro. Metro is a subsystem of Windows 8 that allows you to build tablet type apps. Similar to iPhone apps, these can only be distributed via the Microsoft Store (or via a very arcane Enterprise distribution system). The intent of these is to define a new modern UX model for Windows applications. These programs run by themselves and can’t be displayed as Windows in the regular Windows desktop. There are two ways to develop these via Visual Studio 2012, either as C#/XAML apps or as JavaScript/HTML applications. PhoneGap doesn’t have Windows 8 support yet, but I would expect to see it down the road.

If you develop a Metro app in JavaScript/HTML in VS 2012, don’t expect it to run anywhere except in the Metro environment. This is standard JS, but the core of the program structure is proprietary and you have to use a number of proprietary native controls to get proper Windows 8 functionality. All that being said, you can leverage a large number of standard JavaScript libraries such as JQuery or HighChart. You can also structure your program to isolate the proprietary parts to keep as much reusable as possible.

To integrate with the operating system you need to be a native application, and you have to use proprietary controls to access things like the integrated search and the charms. You can probably simulate the command bar via other means.

If you buy a Windows 8 ARM processor based device, then it only runs Metro or web apps; it will not run any other type of application. So if you want to participate in this world then you do need to develop for Metro (or rely on a web app). It will be interesting to see if sales of ARM based Windows 8 devices take off. Microsoft is releasing their Surface tablet in both ARM and Intel processor based versions.

Right now there is a lot of impedance mismatch between current laptops/desktops and Windows 8. Metro is really designed for the next generation of hardware and doesn’t work at all well with current hardware. Perhaps it’s a mistake making it compatible with old hardware since it yields a bad experience, but Microsoft’s hope is that as new hardware comes to market, the Metro experience will greatly improve.

Summary

It seems that applications need the experience of native applications but want to leverage the portability of web apps. After all, what developer wants to create 4 or 5 different versions of their program? This is leading to new categories of hybrid applications that have large parts written as web apps, but merged with parts created natively to directly interact with operating system and device features. It certainly leads to programmers needing quite a plethora of skills, starting with JavaScript/HTML/CSS for web apps and Metro apps, Objective-C for iOS and Java for Android.