Stephen Smith's Blog

Musings on Machine Learning…

Archive for October 2013

Composing Views in the Sage 300 ERP .Net API


Introduction

In the past couple of weeks we’ve had a look at opening sessions with the Sage 300 ERP .Net API and then opening a View and doing some simple View operations. This week we are going to start looking at the more complicated case of document entry which involves “header/detail” type Views as well as composing multiple Views together. I talked a bit about these concepts in this article on entering O/E Orders via the COM API.

Composing Views

Composing Views is a topic that usually confuses people who are new to the Sage 300 API. It is the basic mechanism that allows multiple Views to work together with each other, as well as with the person calling the Views via the API. Roughly speaking, each data View corresponds to a database table. But when we are doing document entry, like A/R Invoices or O/E Orders, multiple Views must be used together to build up the document. Further, we want to write the document in a single database transaction (we wouldn’t want some tables updated and others not). We also want to ensure that the various Views are used in such a way as to avoid multi-user problems like deadlocks.

View composition is the basic mechanism that enables this functionality. It is really a way of sharing View objects, so that everyone is using the same copy rather than opening their own copies. So who is everyone? It isn’t just you, the programmer: not only do you call Views via the API, but Views call each other all the time. We will look at why they do this in the next section.

Our convention is that the person using the API is responsible for opening all the Views that will be shared. This is done via regular OpenView method calls on a DBLink object. Then we give a reference to all these Views to everyone else via each View’s Compose method.

Let’s look at an example for opening and composing the Views required to enter A/R Invoices:

// Open the A/R Invoice Entry Views
arInvoiceBatch = mDBLinkCmpRW.OpenView("AR0031");
arInvoiceHeader = mDBLinkCmpRW.OpenView("AR0032");
arInvoiceDetail = mDBLinkCmpRW.OpenView("AR0033");
arInvoicePaymentSchedules = mDBLinkCmpRW.OpenView("AR0034");
arInvoiceHeaderOptFields = mDBLinkCmpRW.OpenView("AR0402");
arInvoiceDetailOptFields = mDBLinkCmpRW.OpenView("AR0401");


// Compose the Batch, Header and Detail views together.
arInvoiceBatch.Compose(new ACCPAC.Advantage.View[] { arInvoiceHeader });
arInvoiceHeader.Compose(new ACCPAC.Advantage.View[] { arInvoiceBatch, arInvoiceDetail,
      arInvoicePaymentSchedules, arInvoiceHeaderOptFields });
arInvoiceDetail.Compose(new ACCPAC.Advantage.View[] { arInvoiceHeader,
      arInvoiceBatch, arInvoiceDetailOptFields });
arInvoicePaymentSchedules.Compose(new ACCPAC.Advantage.View[] {
      arInvoiceHeader });
arInvoiceHeaderOptFields.Compose(new ACCPAC.Advantage.View[] { arInvoiceHeader });
arInvoiceDetailOptFields.Compose(new ACCPAC.Advantage.View[] { arInvoiceDetail });

To work with A/R Invoices requires six Views, which we open first. Now we, as the caller of the API, have a View object for each of these, so we can go ahead and make method calls on all of them. However, these Views need to know about each other so they can work together. There may be fields in the header that the details need to update as they are inserted, updated or deleted, such as the totals for each invoice. When we save the invoice, the header must be able to start a database transaction and save all the other Views as well as itself for everything to work correctly. So basically the Compose calls give each View access to the other Views it needs, using the same instances that we opened. For instance, the first Compose call, arInvoiceBatch.Compose, gives the A/R Invoice Batch View access to the A/R Invoice Header View that we opened. Now the batch View calls the same instance of the header View that we are using, so if we look at the fields in the header View we will see the values set by the batch View before anything is saved to disk.

But the code above does seem rather magic. How did I know which Views to open, and how did I know which Views to compose? It is very important that you compose the correct Views, and the order the Views appear in the array passed to the Compose call matters. The way I create these is to run macro recording on the regular UI for the document I am programming. That gives me all the correct open and compose calls in VBA syntax, which is quite easy to translate into C#, and then I have something I know will work. Sometimes you can leave out composed Views that aren’t used in a given situation, but I find that leads to more problems than it’s worth, so I generally just include them all. Similarly the ViewDoc utility and the Application Object Model list all the Views a View composes with, but I find translating this information into code more tedious.

Header/Detail

Within the Sage 300 system, most accounting documents use a header/detail structure. This means that we have a single header record for the document (like an invoice) and then multiple detail records, one for each line item. This gets more complicated since details can have sub-details and so on, but the basic concept is the same. We often use View composition to get header and detail Views working together to correctly insert, edit and delete accounting documents.

As an example of header/detail, consider the Order header and its detail lines. The Order header holds the information there is only one of per order, such as the customer ID and the ship-to address. The details hold the data there are many of: each detail line contains an I/C item number and a quantity, so an order can include as many items as you like.

The Order header also stores information like various totals, so as each detail line is added, updated or deleted these header totals have to be updated to keep them correct. Details are always associated with their header by having their primary key start with the header’s primary key, followed by a uniquifier or counter for the line. This is shown in the diagram below:

[Diagram: header/detail primary keys, with each detail key starting with its header’s key plus a line counter]

Sample Program

The sample code for this article is the ARInvEntryWinForms sample on Google Drive here. This sample opens and composes the A/R Invoice Views, then goes a bit further: it creates a batch with a single entry for the customer number you enter on the form and inserts one detail line with a fixed item number (the quantity is defaulted to 1 by the detail View). There is a bit of error handling. In future articles we will look in detail at the protocol for entering header/detail documents, and we will also cover exception handling and debugging. Just describing the View composing and header/detail concepts has already filled a full article, but I did want to include a bit more code than just opening and composing the Views, so the sample really does something. You need to go into A/R Invoice Entry to see the invoices created.
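To give a flavour of what the sample does after the composes above, here is a rough sketch of creating a batch with one invoice entry and one detail line. The exact header/detail protocol is the subject of a future article, and the field names, item number and ViewRecordCreate options here are from memory rather than lifted from the sample, so treat this as an outline rather than the sample’s actual code:

// Rough sketch only - not the exact sample code. "1200" and "CA-78" are
// hard-coded stand-ins for the customer number typed on the form and the fixed item number.
arInvoiceBatch.RecordCreate(ViewRecordCreate.Insert);       // create a new invoice batch
arInvoiceHeader.RecordCreate(ViewRecordCreate.DelayKey);    // start a new invoice entry in it
arInvoiceHeader.Fields.FieldByName("IDCUST").SetValue("1200", false);

arInvoiceDetail.RecordClear();
arInvoiceDetail.RecordCreate(ViewRecordCreate.NoInsert);    // start a new detail line
arInvoiceDetail.Fields.FieldByName("IDITEM").SetValue("CA-78", false);
arInvoiceDetail.Insert();                                   // add the line to the entry

arInvoiceHeader.Insert();                                   // save the invoice entry (and batch totals)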

Summary

This article covered the basic concepts of View composition and of header/detail relationships. We will start to use these frequently in future articles. These are an important building block to much more sophisticated programs.

Written by smist08

October 27, 2013 at 3:31 am

Starting to Program the Sage 300 ERP Views in .Net


Introduction

Last time we used the Sage 300 ERP .Net Interface to open a session and create a database link to a Sage 300 ERP company. In this article we will start to investigate how to use the API to manipulate the Sage 300 ERP business logic. The individual business logic objects are known as Views (not to be confused with the Views in MVC or SQL Server Views). For a bit more background on the Views have a look at this article.

These business logic Views represent all the various objects in Sage 300 like G/L Accounts, A/R Customers or O/E Orders. There are also Views for doing processing operations like posting G/L batches or running I/C Day End. The nice thing about these Views is that they all share the same interface, and this complete interface is accessible from the .Net API. Although the API to each View is standard, sometimes you need to use several Views together to accomplish a task, and there are about 5 protocols for how you use the Views together to accomplish something. But if you learn the API for one set of Views and learn the 5 protocols then you can do anything in any of the Sage 300 applications from any of the several hundred Views. Additionally you can use any Views created by third party ISV solutions.

Since the .Net interface is what our VB UIs use when they are running web deployed in the old 5.0A style, via the .Net Remoting option, you know that the .Net API is capable of performing any task that can be performed by a regular Sage 300 form.

As we proceed we’ll look into the various parts of the API in more detail, but for this article we’ll just look at how to get started and do some basic operations with data views.

Opening a View

To use a View, first we need to open it from the database link (DBLink). Doing this is quite simple:

ACCPAC.Advantage.View arCustView = mDBLinkCmpRW.OpenView("AR0024");

In this case we needed to add the “ACCPAC.Advantage” part to the declaration of View, because there is also a System.Windows.Forms.View and the compiler needs to know which one we mean. Unfortunately the word View is a bit overused in Computer Science, which can lead to some confusion.
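If the fully qualified name gets tiresome, a standard C# using alias (nothing Sage-specific) lets you keep writing View on its own:

// Put this with the other using statements at the top of the file.
using View = ACCPAC.Advantage.View;

// Now the unqualified name is unambiguous again.
View arCustView = mDBLinkCmpRW.OpenView("AR0024");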

But what is this “AR0024” that we are opening? Where did it come from? In the Sage 300 world, all UIs and Views are uniquely identified by what is called a Roto ID, which consists of two alphabetic characters followed by four decimal digits. Every Sage 300 SDK application, whether written by Sage or an ISV, must register a unique two-letter prefix for its application with the DPP program. This guarantees that two SDK modules won’t conflict with each other. Then the developer of the module (in this case A/R) assigns the numbers to all their Views and UIs. Sage’s convention is to start the Views at 0001 and the UIs at 1000.
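As an aside, the format itself is easy to sanity-check in code. This little helper is purely hypothetical (it is not part of any Sage API); it just encodes the two-letters-plus-four-digits rule described above:

using System.Text.RegularExpressions;

// Hypothetical helper: returns true for strings shaped like "AR0024".
static bool LooksLikeRotoID(string rotoID)
{
    return Regex.IsMatch(rotoID, @"^[A-Za-z]{2}[0-9]{4}$");
}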

So how do you know what to specify? There are several ways to figure this out.

  1. Use the Sage 300 ERP Application Object Model (AOM), which is on our Web site here. From this site you can get a list of all Views for all the Sage applications along with any underlying database table structure. Using this site requires Internet Explorer. You can’t use this for information on ISV applications.
  2. If you have the SDK then you can use the very helpful ViewDoc utility, which is part of the SDK application (and which you must activate inside Sage 300). A benefit of this is that you can also get information on ISV applications that are installed on your system.
  3. Use macro recording. If you macro record a UI which uses the View you are after, then the macro recording will record the DBLink OpenView call with the Roto ID. Just note you need to change the syntax from VBA/COM to C#/.Net (which is fairly easy).
  4. The UI Info tool that is included with the core product can be used, but you need to first get the info on a UI that uses the View and then drill down into the View by getting info on its data source.

After calling OpenView, your view object is ready to use, so let’s see some things we can do.

CRUD

CRUD stands for “Create, Read, Update and Delete”. Here we’ll just look at reading and updating.

When you open a View, no data is loaded. If we don’t know which record we want, one way to find out is to iterate through all the records, or to just read in the first one. Calling GoTop will get the first record.

bool gotOne = arCustView.GoTop();

This function returns a bool: true if it returned a record and false if it didn’t. Most of the .Net API functions have simple return codes like this; these are the outcomes you usually want to handle programmatically. If something else happens then the function will throw an exception; that could be something like a network connectivity error or a bad SQL Server index corruption error. Today we’ll just handle the easy cases. In a future article we’ll look more at error handling and what to do when one of these methods throws an exception.
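As a minimal sketch of that split between return codes and exceptions (we’ll do this properly in the error handling article), you could wrap the call like this:

bool gotOne;
try
{
    gotOne = arCustView.GoTop();   // false just means there were no records
}
catch (Exception ex)
{
    // Connectivity problems, corruption and the like end up here.
    Console.WriteLine("Unexpected error calling GoTop: " + ex.Message);
    gotOne = false;
}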

Now let’s iterate through all the records and print out the customer records (assuming the GoTop above was called first).

String custNum;
String custName;

while (gotOne)
{
    custNum = (String) arCustView.Fields.FieldByName("IDCUST").Value;
    custName = (String) arCustView.Fields.FieldByName("NAMECUST").Value;
    Console.WriteLine("Customer Number: " + custNum +
        " Customer Name: " + custName);
    gotOne = arCustView.GoNext();
}

If we got a record, we get the customer number and customer name and write them to the console. Inside each View there is a collection of Fields. These usually include the database table fields along with some calculated fields; we’ll look at these in much more detail in a future article. For now, this is how you get the various fields from the customer record. How do you know the strings “IDCUST” and “NAMECUST”? You find these the same way you find the Roto ID: the four methods mentioned above will also give you all the fields for each View. We had to cast the result to String because the field value is exposed as an object; each field has a type like number, string, date or time, and that type determines the runtime type of the object. In this case we know these are both strings, so we just tell the compiler that with the cast. If we get this wrong we’ll get an exception at run time. Again, the four methods above will give you the field types as well as some more useful information.
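If you would rather not risk an InvalidCastException on a wrongly guessed type, plain .Net conversion works too; this is just standard C#, nothing Sage-specific:

// Convert.ToString handles whatever object type the field value turns out to be.
custNum = Convert.ToString(arCustView.Fields.FieldByName("IDCUST").Value);
custName = Convert.ToString(arCustView.Fields.FieldByName("NAMECUST").Value);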

OK so that reads all the records, but what if we know which record we want and just want to read it? This is done as follows:

arCustView.Fields.FieldByName("IDCUST").SetValue("1200", false);
arCustView.Read(false);
custNum = (String) arCustView.Fields.FieldByName("IDCUST").Value;
custName = (String) arCustView.Fields.FieldByName("NAMECUST").Value;
Console.WriteLine("After Read, Customer Number = " + custNum +
     " Customer Name: " + custName);

Here we see how to set the key field for the customer record with the SetValue method of the field. The second parameter is the verify flag, which we’ll talk about another time; for now it is fine set to false. It just determines whether we should verify this field right away or wait until later.

Then we call Read to read the record. The lock parameter is set to false, which is nearly always the case (if you set it to true then you will get an exception and an error about needing to be in a transaction, which we’ll also talk about another time).

Then there is the code to get the field values and print them to the console. This is a bit of bad programming with no error checking; note that it will only work if there is a customer 1200, as there is in the sample data.
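Read also returns a bool (true if the record was found), so a slightly safer version of the same read would look something like this sketch:

arCustView.Fields.FieldByName("IDCUST").SetValue("1200", false);
if (arCustView.Read(false))
{
    custName = (String) arCustView.Fields.FieldByName("NAMECUST").Value;
    Console.WriteLine("Customer 1200 is " + custName);
}
else
{
    Console.WriteLine("There is no customer 1200 in this company.");
}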

Now suppose that, having read this record, we want to update it. That turns out to be quite easy:

arCustView.Fields.FieldByName("NAMECUST").SetValue("Ronald MacDonald", false);
arCustView.Update();

Here we set the field “NAMECUST” to a new value and then call the Update method, which takes no parameters. You can then run Sage 300, open the A/R Customers screen, bring up customer 1200 and see that his name has in fact changed.
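You can also confirm the change programmatically with the same Read code as before, which is a handy sanity check while experimenting:

// Re-read customer 1200 and print the updated name.
arCustView.Fields.FieldByName("IDCUST").SetValue("1200", false);
arCustView.Read(false);
Console.WriteLine("Updated name: " +
    (String) arCustView.Fields.FieldByName("NAMECUST").Value);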

Summary

This was a quick introduction to the basics of how to access and use the business logic Views in Sage 300 ERP. All the API elements described apply to all the Views in Sage 300 ERP, so learning to manipulate one object goes a long way toward proficiently manipulating all objects.

I’ve updated the sample application here as mentioned in this article.

Written by smist08

October 20, 2013 at 12:27 am

An Introduction to the Sage 300 ERP .Net API


Introduction

I’m planning to write a series of blog articles on the Sage 300 ERP API and in particular on the .Net API that has been part of Sage 300 since version 5.1A. I have a couple of goals with these articles:

  1. Provide a deeper and more complete explanation of many parts of our API. Although all the samples in this series will be in C# and use the .Net API, the ideas will apply to integrating to Sage 300 via any of our other APIs including COM, Java, XAPI, etc.
  2. Explore a number of newer .Net technologies and look at how these can be integrated to Sage 300. These may include things like LINQ and ASP.Net MVC.

The goal is to start simple and then to progress to deeper topics. I’m sure I’ll have blog postings on other topics in-between, but look for this series to continue over time.

I’ll be developing all these projects using the latest C#, Visual Studio and .Net framework. I’ll post the full project source code for anything mentioned. So to start these will be Visual Studio 2012, but I’ll switch to 2013 after it’s released. I’m also currently running Sage 300 ERP 2014, but these examples should work for version 5.6A and later.

Caveats

When you use the Sage 300 .Net interface you can’t share any objects with the Sage 300 COM objects or the ActiveX screen controls we use in our VB6 programming framework. If you want to use any of the COM/ActiveX controls then you must use the Accpac COM API. When you use the .Net interface you don’t have any interoperability with the other VB6-related technologies. From .Net you do have full access to our COM API via .Net’s COM Interop capability.

Generally this is OK. If you are integrating a .Net product or writing an integration to something like a web server then you don’t want any of this anyway. But to point it out up front: you cannot use a .Net session object anywhere in the COM/ActiveX API. The COM/ActiveX session object is similar, but not interchangeable.

Documentation and Samples

Some versions of Sage 300 have included the help file that contains the .Net interface reference manual; however, this isn’t universally available. To help with this I’ve put a copy on Google Drive for anyone who needs it, located here. Note that when you download help files over the Internet, Windows will mark them as dangerous and they may not open; to get around that, have a look at this article.

Of course within Visual Studio, since our API is completely standard .Net, you will get full Intellisense support, so Visual Studio can give you a list of property and method names as you type along with giving you the argument list for the various methods.

I’ll make the projects that are described in this series of blog posts available here. The one for this article is WindowsFormsApplication3 which then reminds me I should fill in the dialog box properly when I create a new project.

There is additional documentation on our DPP Wiki, but you need to be a member of the DPP program to access this.

Project Setup

First you must have Sage 300 ERP installed, and you must not have de-selected the “Sage 300 ERP .Net Libraries” in the installation program when you installed the product. Then, after you create your Visual Studio project (of whatever type it is), you need to add two references: ACCPAC.Advantage.dll and ACCPAC.Advantage.Types.dll. Both of these will have been installed into the .Net GAC. It’s usually worth adding a “using ACCPAC.Advantage;” statement to the beginning of source files that access our API, since it saves a lot of typing of ACCPAC.Advantage.

Now you are ready to use the Sage 300 ERP API. In this first article I’m going to create a simple .Net Windows Forms C# application, put a button on the form, and on its Click event exercise various parts of the API. Nothing exciting will happen, but we will print some output to the console to see whether things are working.
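For reference, the shape of that test application is roughly the following; the class and handler names here are mine, not the ones in the sample project:

using System;
using System.Windows.Forms;

// Hypothetical skeleton of the test app - the real sample is the
// WindowsFormsApplication3 project mentioned above.
public class MainForm : Form
{
    public MainForm()
    {
        Button runButton = new Button { Text = "Run API calls" };
        runButton.Click += RunButton_Click;
        Controls.Add(runButton);
    }

    private void RunButton_Click(object sender, EventArgs e)
    {
        // The session, DBLink and View code from the sections below goes here.
        Console.WriteLine("Exercising the Sage 300 .Net API...");
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new MainForm());
    }
}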


Creating and Using a Session Object

I talked a lot about creating sessions in this article, mostly from the COM point of view. Now we’ll recap a bit, but from the .Net point of view. The first thing you need to do when using our API is to create and initialize a session object.

using ACCPAC.Advantage;
private Session session;
session = new Session();
session.Init("", "XY", "XY1000", "62A");

At this point you have a session that you can use. For our purposes here, the important thing is that session.Init is called; nothing else will work until it is. The parameters aren’t that important at this point. We’ll come back to them later, but you can just use the ones in the above example for most purposes.

Although we aren’t signed on yet, there are still some things we can do with the session object, like get a list of the companies, which is something that is required to build a sign-on dialog box.

foreach ( Organization org in session.Organizations )
{
   Console.WriteLine("Company ID: " + org.ID + " Name: " +
       org.Name + " Type: " + org.Type);
}

Notice that the session’s Organizations property is a standard .Net collection, and as a result we can use any C# language support for dealing with collections, such as the foreach statement shown. Generally in our API we’ve tried to make sure everything is presented in a standard way for .Net programmers to use. Another very useful collection on the session is the Errors collection, which holds all the error messages (including warnings and informational messages) raised since it was last cleared. We’ll cover the usefulness of the Errors collection in a future article.
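As a taste of what’s coming, dumping the Errors collection looks something like the sketch below. I’m assuming the Count property, indexer, Message property and Clear method here, which is how the reference help describes the collection:

// Print and then clear any accumulated errors, warnings and informational messages.
for (int i = 0; i < session.Errors.Count; i++)
{
    Console.WriteLine("Sage 300 message: " + session.Errors[i].Message);
}
session.Errors.Clear();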

Now let’s actually sign on to Sage 300:

session.Open("ADMIN", "ADMIN", "SAMINC", DateTime.Today, 0);

Basically you need to provide the same sort of information that a user would sign-on to the desktop. Note that although the user id and password are forced to uppercase in the desktop’s sign-on dialog, they are actually case sensitive at the API level, so you must enter these in upper case. Also note that the session date is given as a standard .Net date type.

Now that you are signed on, you can start to do more things, including accessing the database. This API was created before we duplicated the system database into the company database, so from the session you can still get at some old system database tables like the currencies (which are now read from the copy in the company database). But to really start using the database you need to create a DBLink object from the session object. To do this you specify whether you want a link to the company or the system database (which will pretty much always be Company now), plus some flags, such as whether you need read-write or read-only access.

private DBLink mDBLinkCmpRW;
mDBLinkCmpRW = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite);

Now that we have a database link we can directly access some things like the company information, fiscal calendars and the list of active applications for this company. But the main use of this object is to open the Sage 300 ERP business logic objects which we typically call Views.
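As a preview (this exact call is walked through in the Views article above), opening a business logic View from the DBLink looks like this:

// Open the A/R Customers View by its Roto ID.
ACCPAC.Advantage.View arCustView = mDBLinkCmpRW.OpenView("AR0024");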

Summary

So far we’ve only managed to open a session and start using a few basic .Net objects. But these are the starting point to any project so we needed to take a bit of time to set the stage for future articles. Next time we’ll start to create View objects from our DBLink objects and start to explore some View functionality.

If you have any opinions on the coding style, the use of C# or any other questions, please leave a comment for the article and I’ll try to address them.

Written by smist08

October 12, 2013 at 4:57 pm

Moving on to 64-Bit


Introduction

Long ago we contended with the migration of our Windows programs from 16-Bit to 32-Bit. This was catalyzed by the release of Windows 95, which was the first popular desktop version of Windows at this bitness. At that time everyone jumped on 32-Bits because under 16-Bits you could only address 64K of RAM at a time (one segment), and this just wasn’t enough. With 32-Bits suddenly we could address 2 gigabytes of RAM. Plus the 32-Bit Windows API fixed many problems in the Windows API and allowed true and proper multi-tasking. At this point Windows had become a true OS. Perhaps we weren’t really happy with Windows until Windows XP, but at least it was starting to be powerful enough for typical usage.


We’ve been seeing 64-Bit versions of Windows for a while now, but it was really with Windows 7 and Windows Server 2008 that these were finally working well enough to be compelling. Plus it is now quite inexpensive to purchase more than 2Gig of RAM, and if you don’t have a 64-Bit operating system then all that extra RAM isn’t going to get used. The nice thing about these 64-Bit versions of Windows is that they run 32-Bit Windows programs extremely well, and each separate 32-Bit program can get up to 2Gig of RAM and you can run a whole bunch of these.

We haven’t seen the same mad rush to 64-Bit applications like we saw to 32-Bit applications. This is because for most purposes 32-Bits is just fine and 2Gig of addressable RAM is plenty. Another reason is that there is no “thunking” mechanism to allow 64-Bit processes to call 32-Bit DLLs or vice versa, so your application has to be 100% 64-Bit or 100% 32-Bit. This means you need to move your application along with all libraries, add-ons and anything else all to 64-Bit. This can be a logistical problem and is the main reason people still run 32-Bit Internet Explorer and Office.

So what sort of programs benefit from 64 bits? The main ones are server processes like SQL Server. When running on a server with say 64Gig RAM, why shouldn’t SQL Server use 32Gig for cache? And then to efficiently access this it needs to be 64-Bit. Similarly programs like Adobe Photoshop need to handle huge amounts of RAM to do their job and are far more efficient at this as 64 Bit programs.

But what about business applications like Sage 300 ERP? Does it make sense for these to be 64-Bit? For the regular Windows desktop and for regular data entry screens it doesn’t make any sense. The heavy lifting is done by SQL Server, and communicating with 64-Bit SQL Server over the network is no problem. Individual screens don’t need anywhere near 2Gig of RAM, so they are fine as 32-Bit programs. And then there is no problem with customizations and integrations to other programs; it all continues to work.

The difference is once we start accessing our business logic from a web server. Then we might want hundreds of users using our business logic. Web servers typically run as one process (e.g. an IIS application pool or a Java Tomcat server) and then all the users run as separate threads inside this process. So if we want to fully exploit a powerful server with lots of RAM and many processors then we need to be a 64-Bit program.

The 64 Bit C/C++ Compiler

For the most part Microsoft left the command line arguments and functioning of the C/C++ compiler the same between 32-Bit and 64-Bit programs. You only need to invoke a different one depending on whether you want to compile 32 or 64 bit. If you want to compile for 32-Bit then set your PATH to “C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin” and if you want to compile for 64-Bit then set your PATH to “C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\bin\amd64”. You also need to ensure you keep your 32-Bit libraries separate from your 64-Bit libraries.

If you are building directly from Visual Studio or using MSBuild then you need to either add or change your configuration to target x64. All in all rather easy.

When you build 32-Bit you can #ifdef using WIN32. Be careful in 64-Bit because both WIN32 and WIN64 are defined. This is because most programs use “not WIN32” to mean WIN16, so strangely having both defined leads to more correct behavior, but you do need to be aware of it.

Building C/C++ Programs

The big differences going to 64 Bit are that pointers are now 64 Bits, you can more easily use 64 Bit integers and structures are now aligned on 8 byte boundaries. All the regular C data types like int and long are still 32 bits in size. So let’s look at some of the things that need dealing with and some of the things that don’t.

When we migrated from 16-Bits to 32-Bits the big killer was the changes to the Windows Proc. This is the callback function that gets called for each Windows or Dialog box message. In 32 Bits Microsoft changed which parameter did what and it caused havoc. They introduced macros to help and everyone had to adopt these macros and then do lots of debugging. I suppose with these macros in place, this should be easier this time around, but actually I don’t care! For this port to 64 Bits I only want to port the API to be used by server processes like web servers. I don’t actually want to port any C based UIs to 64 Bits, so I don’t really care if these Windows Procs work or not. For now I’m just ignoring them entirely. This also gives us a chance to produce a version of System Manager with far fewer DLLs that can easily be installed in a PaaS environment where you need to quickly install your software as a VM boots.

Generally I found that porting to 64-Bit is helped by a good debugger, so once you get test programs working in 64-Bits it’s fairly easy to sort out what is going on. Plus when you google the various errors or functions that are failing, you get lots of good hits.

Some things that cause problems are:

  • Since pointers (and hence handles) are 64-Bit, you cannot store them in longs or DWORDs. The compiler gives you warnings about doing this, and at some point these will need to be fixed. Superficially such code may work, since most of these pointers seem to have their high 32 bits set to 0 on my computer, but you are living on borrowed time.
  • Similarly some lengths like SQLLEN in ODBC are now 64-Bit; this way you can have more than 2Gig records in a SQL table. Basically you have to be careful and use the correct types, or you will get many compiler warnings and will overflow if you ever get that many records. Worse, if you pass a pointer to an int-sized variable then the write will overrun and corrupt something else, or GPF.
  • 32-Bit Windows was fairly efficient, so even though it had a structure packing of 4, most Windows-related structures had no empty space, so you didn’t have problems if, say, you compiled with the -Zp option (which sets structure packing to 1). However in 64-Bits these Windows structures are sets of ints and longs, and the structure packing of 8 leaves lots of empty space and is incompatible with -Zp. I ran into problems here with Windows security descriptors: basically all the functions and macros for dealing with these don’t work under a different packing option.
  • Beware that registry entries are no longer under Wow6432Node; you get to use the real entries now. Also make sure you use the 64-Bit versions of ODBC and RegSvr32.exe. A lot of EXEs that got 32 tacked on the end now have 64-Bit versions but keep the same file name.
  • The 64-Bit compiler doesn’t support the _asm keyword, so no inline assembler. We only had a few instances of this, cut and pasted from MSDN for diagnostics like reporting a stack trace when something went wrong.

Summary

Moving to 64 Bits isn’t for everything; however, for running in the context of a Web Server it makes a lot of sense. I think the migration from 32 Bit to 64 Bit is much easier than the migration from 16 Bit to 32 Bit was. The tools are much better these days and the tools behave the same way in 32 or 64 bits. Plus all the work we did to use macros and make things type safe when we went from 16 Bits to 32 Bits now helps with the transition to 64 Bits. The only hard part is that you need to go all 64 Bit and can’t just call 32 Bit DLLs. Now I just wonder how long before we need to re-compile for 128 Bits.

Written by smist08

October 5, 2013 at 3:39 pm