Stephen Smith's Blog

All things Sage ERP…


Performance and the Sage 300 Views Part 2


Introduction

Last week we discussed avoiding table scans when using the Sage 300 ERP APIs. This week we are going to look at some other issues to do with updating data and with processing meta-data.

Last week I showed a cheetah running as an example of performance and speed (the fastest land animal), but this week here she is resting and getting some attention.


AOM/UI Info/ViewDoc

First, if you are wondering how to find out which indexes a View supports, there are quite a few tools that will tell you. You can always look in SQL Management Studio, but then you won't know which index it is in our numbering scheme. ViewDoc is a good tool that comes with the SDK and gives this information. UI Info comes with System Manager and can drill down through the UI information to get detailed View information. Then there is the Sage 300 Application Object Model (AOM) located here. Just note that, for some obscure reason, you must use Internet Explorer to use the AOM.

Updating Data

Often if you are manipulating lots of records it's in a header/detail situation. In this case all the database operations are done when you insert or update the header. The nice thing about this is that the Views know a lot about our database API and will do this in an optimal manner, so you don't need to worry about it. Similarly if you delete a header, the View will delete all attendant details for you in an efficient manner.

But suppose you want to update a bunch of records using our .Net API and want to know the most efficient way to do this. Say we want to add something to the end of every A/R Customer Name. An easy brute force way to do this would be:

arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}

 

This works but you might find it a bit slow. We can speed it up quite a bit by bracketing the whole thing in a database transaction:

mDBLinkCmpRW.TransactionBegin();
arCUS.RecordClear();
while (arCUS.Fetch(true))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();
}
mDBLinkCmpRW.TransactionCommit();

 

The times from the sample program (the same one as last week but with a bit added) are:

Time to update all customers: 00:00:00.087
Time to update all customers in a transaction: 00:00:00.038

So putting things in a database transaction helped. This is for Sample Data so there are only a few customers. The updated sample program is the PerformanceSamples project located here (both folder and zip file).

Database Transactions

Generally when using our API you don't need to worry about database transactions, but occasionally, as in the above example, they are necessary. In the above example the first method has the side effect that each update is done in a separate transaction. That means you have the overhead of starting and committing a transaction with every record update. In the second example we start a transaction so all the records are committed as a single transaction. Strictly speaking the two examples don't do the same thing: if the first example throws an exception part way through, then all the updates done up to that point will be in the database, whereas in the second example they will be discarded since the transaction will be rolled back. This difference can be quite important if there are database integrity issues to consider. Generally Sage 300 ERP uses transactions to go from one state where the database has full integrity to another. This way we can rely on database transactioning to always maintain full database integrity.

There is overhead to setting up and committing a transaction, but there are also resources used for every operation done inside a transaction. At some point the above example will start to slow down if you have too many A/R customers. Generally you might want to commit the transaction every thousand customers or so for optimal performance (but make sure you maintain database integrity along the way).
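For example, here is a minimal sketch of committing in batches, reusing the arCUS view and mDBLinkCmpRW link from the examples above (the batch size of 1000 is just a starting point and worth tuning for your data):

int count = 0;
mDBLinkCmpRW.TransactionBegin();
arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    arCUS.Fields.FieldByName("NAMECUST").SetValue(
        arCUS.Fields.FieldByName("NAMECUST").Value + "A", false);
    arCUS.Update();

    // Commit every 1000 records, but only at a point where the database
    // is left in a consistent state.
    if (++count % 1000 == 0)
    {
        mDBLinkCmpRW.TransactionCommit();
        mDBLinkCmpRW.TransactionBegin();
    }
}
mDBLinkCmpRW.TransactionCommit();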

Also keep in mind that records updated in a transaction are locked from the point of update through to the end of the transaction, so updating a lot of records in one transaction will lock a lot of records and cause anyone else trying to read those records to wait until your transaction completes. So try to keep transactions quick. Definitely don't do any UI type operations in the middle of a transaction (like asking the user a question).

Revisioned Views

Revision List type views store all inserts/updates/deletes in memory until you call Post. Generally these are detail views and you don't see this functionality because it's handled by the header. But occasionally you may need to deal with one of these directly (like perhaps GLAFS). In this case, since each Post is a transaction, you just need to be aware of how often you call it, as this has the same effect on performance as mentioned above.

Deleting

Although you can delete records as above by just replacing the Update with a Delete call, there is a better way. The Views have a FilterDelete method where you pass in a browse filter and all the records that match will be deleted. This will prove to be quite a bit faster than the loop above.
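As a hedged sketch (assuming FilterDelete takes a browse filter string, and using a purely hypothetical filter), deleting every customer whose name ends in "A" in one call would look something like:

// Hypothetical example only: delete all records matching a browse filter in
// one call instead of fetching and deleting them one at a time.
arCUS.FilterDelete("NAMECUST LIKE \"%A\"");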

Meta-Data

If you run RVSpy with all the View calls selected you will see a lot of meta-data calls, getting information on fields and such. Generally meta-data calls are quite fast and don't involve going to the database. However if you really go crazy you can slow things down quite a bit. If you make everything dynamic then you could end up making lots of meta-data calls, and cumulatively these slow you down a bit. Similarly, using constants to get at fields is slightly faster than passing field names, because you avoid a dictionary lookup (admittedly quite fast but not as fast as direct access). Mostly people exercise good judgement and don't go too wild driving everything from meta-data, but we have seen some crazy cases.
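One simple way to cut down on repeated meta-data lookups is to resolve the field object once, outside the loop, rather than calling FieldByName on every iteration. A small sketch using the same arCUS view as before (assuming, as I believe is the case, that the field object's Value reflects whichever record is currently fetched):

// Look the field up once rather than on every pass through the loop; the
// field object's Value should track the current record.
var custName = arCUS.Fields.FieldByName("NAMECUST");

arCUS.RecordClear();
while (arCUS.Fetch(false))
{
    Console.WriteLine(custName.Value);
}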

Summary

Just a quick overview of some performance tips. Hopefully these all help to make your use of the Sage 300 API more efficient.

 

Performance and the Sage 300 Views Part 1


Introduction

The Sage 300 ERP Views (Business Logic) give you a great deal of power to perform Accounting operations through our various APIs. However as in any programming, performance must always be taken into account. The Sage 300 ERP Views have a lot of features to help you perform operations with good performance, but like anything if they are used incorrectly, performance can be miserable.

This article is going to talk about various features and options that you can take advantage of to improve your application’s performance. As I am writing the article, it’s getting quite long, so I think I’m going to break it into two parts.


Measure and Test

One of the big mistakes people make when performance tuning is to just make assumptions and changes without doing real measurements. If you have your code in a source control system, first establish a baseline for how long something takes, then make your changes and re-measure the time. Only check in your changes if the time is faster; if it isn't, then you are just churning your code and potentially adding bugs. Performance is subtle and often the best ideas and intentions just make a process slower.

Multi-User versus Single-User Performance

This article is about optimizing processes for single users. Often if you want to optimize for better multi-user throughput then it's all about reducing locks and keeping resource usage down. Sometimes these goals align, i.e. one person doing something quicker translates to 100 people doing things quicker; sometimes they are opposed, i.e. one person can do something way quicker if he takes over all available resources to the detriment of everyone else.

Read-Only versus Read-Write

You can open our database links and views either in read-write mode or read-only mode. Generally if you aren’t updating the data then you want to open in read-only mode as this makes things quite a bit faster. If you might update the data then we have to use more expensive SQL operations so that if you do update the data, the update is fast and multi-user considerations are handled. If you open a table or link read-only then we use much lighter weight SQL operations and the data is returned much quicker. Finders use this to display their data quicker.
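For example, here is a minimal sketch of opening a read-only company link with the .Net API (the session values are the usual sample data placeholders seen in earlier posts, the version string may differ on your install, and AR0024 is the A/R Customers view):

using System;
using ACCPAC.Advantage;

Session session = new Session();
session.Init("", "XY", "XY1000", "63A");   // app id/version here are placeholders
session.Open("ADMIN", "ADMIN", "SAMLTD", DateTime.Today, 0);

// Open the company database link read-only; Views opened from it then use
// the lighter weight SQL operations described above.
DBLink dbLinkRO = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadOnly);
View arCUS = dbLinkRO.OpenView("AR0024");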

FilterSelect/FilterFetch versus Browse/Fetch

When you Browse/Fetch you can always update or delete the record fetched. As mentioned above that can introduce extra overhead and slow things down. Making the table or link read-only will help Browse/Fetch, but perhaps a better method is to use the FilterSelect/FilterFetch methods which are better optimized for SQL Server than Browse/Fetch. The results from these can’t be updated or deleted but at the same time the access method is always light weight whether the link is open read-only or read-write.

Indexes

Sage 300 will always use an index to read data. We have a lot of code to optimize access based on available indexes. If you use the indexes provided your code will be much faster.

For example, suppose you want to know if there are any open G/L Batches. A quick bit of code to do this is:

glBCTL.Browse("BATCHSTAT=1", true);
bool isOpenBatch = glBCTL.GoTop();

This works pretty well on sample data, but then you go to a client site and suddenly it becomes quite slow. The reason is that since BATCHSTAT isn't part of the primary index, the GoTop basically goes looking through the Batch table until it reaches the end or finds an open batch. Since open batches are usually at the end, this tends to be sub-optimal. Practically you could speed this up by searching through the table backwards, since then you would probably find one quicker, but if there are no open batches you still search the whole table. Fortunately there is a better way. The GLBCTL table has two indexes: one is its primary default index of BATCHID and the other is a secondary index on BATCHSTAT and BATCHID (to make it an index without duplicates). So it makes sense to use this index:

glBCTL.Order = 1;
glBCTL.Browse("BATCHSTAT=1", true);
isOpenBatch = glBCTL.GoTop();

Simply adding the Order property makes this search much quicker. I included a sample program with timers and the full code. The results on sample data show the speed difference (not that it was all that slow to start with):

Time to determine if there are open batches: 00:00:00.034
Time to determine if there are open batches take 2: 00:00:00.007

The sample program is located here. It's the PerformanceSamples one (folder and zip).

So generally you want to use an index that matches the fields that you are searching on as closely as possible. Usually having clauses in your browse filter that use the index segments from left to right will result in the fastest queries.

This example may look a little artificial, but once you get into the operational modules like O/E and P/O this becomes crucial. That is because the main tables like the Order Header have a uniquifier as the primary index. When you want to look something up it's usually by something like order number, and to do this efficiently you have to use an alternate index. So once you are using these modules you will be using alternate indexes a lot. In these modules also be careful that quite a few alternate indexes allow duplicates, so you might get back quite a few records unexpectedly.

RVSpy/DBSpy

RVSpy and DBSpy are good tools for identifying bad behavior. The logs contain time information so you can see where the time is being used, but more often than not doing something bad for performance results in a series of operations appearing over and over in these logs. Usually scrolling to the middle of the output file is a good way to see something going awry. You can also use SQLTrace or ODBCTrace, but I find these slightly less useful.

When using RVSpy for this purpose, it helps to turn off logging to a Window (slow) and only log to a file (make sure you specify one). Further choose the View calls you want to log, usually disabling anything to do with meta-data and anything that is field level.

So if you see output like:

[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.58;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.58].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.59;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.59].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}
[5b8.7ff.37b0] CS0003: CSCCD    [01:12:06.60].Fetch(view=0x2F1047AC)
[5b8.7ff.37b0] 0 <==[01:12:06.60;t=0;ovh=0] {}

If this goes on for pages and pages then you have something wrong.

Avoid Table Scans

Most of this article is about avoiding table scans, but just to re-iterate table scans are bad. People are often fooled by testing on sample data. Many of the tables in sample data are quite small and it doesn’t really matter what you do. However in the real world with real customer databases things will usually be quite different. For instance sample data has 9 tax authorities, which you might think is reasonable. But in the USA where any municipal government agency can charge a sales tax, there are over 35,000 tax authorities. If you read all these (like to populate a combo-box to pick one from), then you will run very slowly and your customers will be unhappy.

Summary

Sage 300 ERP has many mechanisms to access and manipulate data efficiently. But as with anything in programming, if you use APIs without due care and attention then performance (and quality in general) will suffer.

Written by smist08

March 10, 2015 at 9:44 pm

Drilldown in Sage 300 ERP


Introduction

Much accounting detail is entered in one application and passed on to another for recording. Drilldown is the ability to reverse the audit trail and display, application by application, the document back to its original entry into the Sage 300 ERP system. For example, in Sage 300 General Ledger (G/L), you can drilldown from General Ledger Transaction History to the Journal Entry, from the Journal Entry to the originating transaction in Accounts Receivable, and from the Invoice, Credit Note, or Debit Note, to the originating transaction in Order Entry.

The way this works is a bit cryptic in Sage 300 ERP’s database and this blog article will attempt to explain some of the internal workings so that developers and customizers who want to use this data for other purposes can hopefully figure out how to interpret it.

The documentation for the full drilldown infrastructure for third party developers is contained in Appendix L of the SDK’s Programming Guide.


Drilldown Database Fields

The drilldown fields in a document provide a link back to the application that created the document. They are done in a generic way so any application (Sage or third party) can provide this information and have its screens drilled down to. As a result the fields are fairly generic and it's up to the drilldown target to provide what it needs when it creates the document. There are three fields: the source application (our usual two character application id like AP), then a drilldown type (each application may have several document types like invoices or receipts), and last a generic link field, which is a large number into which the application packs whatever it needs to do a link.

For example you can drill down from G/L Journal Entry back to the application that created the Journal. In the GLJEH table there are three fields: DRILAPP, DRILSRCTY, DRILLDWNLK. Suppose P/O creates a Journal Entry, it might populate DRILAPP with “PO”, DRILSRCTY with 3 (for Receipt) then DRILLDWNLK with 1740 (where 1740 is a link to PORCPH1.RCPHSEQ).

This is rather cryptic since these fields are meant to be internal to the application that will be drilled down to. But suppose you want to use these fields for other purposes. Here I'll give a few examples of how Sage applications use these, which should help for many cases. Plus they will give an indication of how these are built so you can reverse engineer other cases.

Here are the ones used for I/C, O/E and P/O. These are pretty straightforward due to the way data is indexed in these applications. The various types and links used are:

IC:

Receipt: 1: ICREEH.DOCUNIQ
Shipment: 2: ICSHEH.DOCUNIQ
Adjustment: 3: ICADEH.DOCUNIQ
Transfer: 4: ICTREH.DOCUNIQ
Assembly: 5: ICASEN.DOCUNIQ

OE:

Shipment: 3: OESHIH.SHIUNIQ
Invoice: 1: OEINVH.DAYENDNUM
Credit and Debit Note: 2: OECRDH.CRDUNIQ

PO:

Receipt: 3: PORCPH1.RCPHSEQ
Invoice: 5: POINVH1.INVHSEQ
Return: 4: PORETH1.RETHSEQ
Credit Note: 6: POCRNH1.CRNHSEQ
Debit Note: 7: POCRNH1.CRNHSEQ

A/R and A/P are a bit more difficult. Here they have to pack quite a bit of information into that field. A 10-byte BCD can hold up to 18 digits. Into this we want to pack the Posting Sequence Number, Batch Number and Entry Number. The way this works, the first digit is the size of the Posting Sequence Number, the second digit is the size of the Batch Number, then you have the Posting Sequence Number, then the Batch Number, and the remainder is the Entry Number. Since the first two digits are used for sizes, the sum of the lengths of the Posting Sequence Number, Batch Number and Entry Number must be less than or equal to 16.

For instance if the DRILLDWNLK is 222765000000000001 then the length of the Posting Sequence Number is 2 as is the length of the Batch Number. The Posting Sequence Number is 27, the Batch Number is 65 and the Entry is 1.
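Here is a small sketch of decoding such a link along the lines described above, assuming it arrives as the string of digits stored in DRILLDWNLK:

using System;

class DrillDownLinkDecoder
{
    // Decode an A/R or A/P style drilldown link; e.g. "222765000000000001"
    // gives Posting Sequence 27, Batch 65 and Entry 1.
    static void Decode(string link)
    {
        int postingLen = int.Parse(link.Substring(0, 1));
        int batchLen = int.Parse(link.Substring(1, 1));
        long postingSeq = long.Parse(link.Substring(2, postingLen));
        long batchNum = long.Parse(link.Substring(2 + postingLen, batchLen));
        long entryNum = long.Parse(link.Substring(2 + postingLen + batchLen));
        Console.WriteLine("Posting Sequence={0}, Batch={1}, Entry={2}",
            postingSeq, batchNum, entryNum);
    }
}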

Drilldown View

Knowing the raw format is fine for some applications. But if you are operating in an environment with access to the Sage 300 Business Logic then you can call the application’s View to interpret this value for you and give it in the format of a UI to run and the parameters to pass it, to get the correct information displayed.

Here we will write a small .Net application that uses the Sage 300 .Net API to process through the drilldown information in the G/L Journal Header and process the A/P drilldown information. You can find the project here; it is the Drilldown one.

Each application that supports drilldown has such a view. It is defined in the application's xx.ini file (in this case ap.ini): in the [setup] section there will be a DrillDownView=aannnn entry which specifies the drilldown view (in this case AP0062). In the sample program, I just hard code the View and leave it as an exercise to the reader to generalize and load these from the .INI file.
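A hypothetical sketch of that exercise, reading the DrillDownView entry from ap.ini (the path is a placeholder, and a fuller version would also confirm the entry sits under the [setup] section):

using System;
using System.IO;
using System.Linq;

string iniPath = @"C:\Sage\Sage300\AP63A\ap.ini";   // placeholder path
string viewId = File.ReadLines(iniPath)
    .Select(line => line.Trim())
    .FirstOrDefault(line => line.StartsWith("DrillDownView=", StringComparison.OrdinalIgnoreCase))
    ?.Substring("DrillDownView=".Length);           // e.g. "AP0062"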

Basically you use this view by setting the drill down type and link and then calling Process(). This then populates the other fields. This gives you a status field of whether you can drill down on this, a roto id of a UI to run and the parameters to pass the UI. Note that UI parameters are separated by line breaks.

So in this case when we run the application we get lines specifying the drilldown info followed by the drilldown View's interpretation of it. For instance:

Drill down info in GLJEH: AP 0 223055000000000001

UI Information to run for this: AP2100 MODE=1\nBATCH=55\nENTRY=1

Here is the main part of the code that processes this:

// Cycle through all of GLJEH and printout all the drill down information
while (true == glJEH.Fetch(false))
{
    string drillSrce, drillLnk, rotoid, parameters, drillkey, drillInfo;
    int drillType;

    drillSrce = glJEH.Fields.FieldByName("DRILAPP").Value.ToString();
    drillType = Convert.ToInt32(glJEH.Fields.FieldByName("DRILSRCTY").Value.ToString());
    drillLnk = glJEH.Fields.FieldByName("DRILLDWNLK").Value.ToString();
    Console.WriteLine("Drilldown: " + drillSrce + " " + drillType + " " + drillLnk);
    if (drillSrce.Equals("AP"))
    {
        apDrill.Fields.FieldByName("SRCETYPE").SetValue(drillType, false);
        apDrill.Fields.FieldByName("DRILLDWNLK").SetValue(drillLnk, false);
        apDrill.Process();
        drillInfo = apDrill.Fields.FieldByName("DRILLTYPE").Value.ToString();
        rotoid = apDrill.Fields.FieldByName("ROTOID").Value.ToString();
        parameters = apDrill.Fields.FieldByName("PARAMETERS").Value.ToString();
        drillkey = apDrill.Fields.FieldByName("DRILLKEY").Value.ToString();
        Console.WriteLine(drillInfo + " " + rotoid + " " + parameters + " " + drillkey);
    }
}

Summary

Drill down is a useful feature in Sage 300 ERP and hopefully this information helps people leverage the infrastructure for some new interesting customizations and integrations.

Written by smist08

February 27, 2015 at 8:48 am

On Calculating Dashboards


Introduction

Most modern business applications have some sort of dashboard that displays a number of KPIs when you first sign in. For instance, here are a couple of KPIs from Sage 300 ERP:

[Screenshot: Sage 300 portal KPIs]

To be useful, these KPIs can involve quite sophisticated calculations to display relevant information. However users need their home page to start extremely quickly so they can get on with their work. This article describes various techniques to calculate and present this information quickly, starting with easy straightforward approaches and progressing to more sophisticated methods that utilize the power of the cloud.

Simple Approach

The simplest way to program such a KPI is to leverage any existing calculations (or business logic) in the application and use that to retrieve the data. In the case of Sage 300 ERP this involves using the business logic Views which we’ve discussed in quite a few blog posts.

This usually gives a quick way to get something working, but often doesn’t exactly match what is required or is a bit slow to display.

Optimized Approach

Last week, we looked a bit at using the Sage 300 ERP .Net API to do a general SQL query, which could be used to optimize calculating a KPI. In this case you could construct a SQL statement to do exactly what you need and optimize it nicely in SQL Management Studio. In some cases this will be much faster than the Sage 300 Views; in some cases it won't be, if the business logic already does this.

Incremental Approach

Often KPIs are just sums or consolidations of lots of data. You could maintain the KPIs as you generate the data. So for each new batch posted, the KPI values are stored in another table and incrementally updated. Often KPIs are generated from statistics that are maintained as other operations are run. This is a good optimization approach but lacks flexibility, since to customize it you need to change the business logic. Plus the more data that needs to be updated during posting, the slower the posting process, annoying the person doing the posting.

Caching

As a next step you could cache the calculated values. If the user has already executed a KPI once today then its value is cached, so if they exit the program and then re-enter it the KPIs can be drawn quickly by retrieving the values from the cache, with no SQL or other calculations required.

For a web application like the Sage 300 Portal, rather than caching the data retrieved from the database or the calculated values, usually it would cache the JSON or XML returned from the web service call that asked for the data. So when the web page for the KPI makes a request to the server, the cache just gives it the data to return to the browser; no formatting, calculation or anything else is required.

Often a cache that lasts one day is good enough. There can be a manual refresh button to force a recalculation, but mostly the user just needs to wait for the calculation once a day and then things are instant.
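As a rough sketch of this kind of one-day cache in .Net (the KPI id and the computeKpiJson delegate are placeholders rather than Sage 300 APIs, and MemoryCache needs a reference to the System.Runtime.Caching assembly):

using System;
using System.Runtime.Caching;

public static class KpiCache
{
    // Returns the cached JSON for a user's KPI, recalculating at most once a day.
    public static string GetKpiJson(string userId, string kpiId, Func<string> computeKpiJson)
    {
        string key = userId + "/" + kpiId;
        ObjectCache cache = MemoryCache.Default;
        string json = cache[key] as string;
        if (json == null)
        {
            json = computeKpiJson();                              // slow path: SQL or business logic
            cache.Set(key, json, DateTimeOffset.Now.AddDays(1));  // serve from cache for the next day
        }
        return json;
    }
}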

The Cloud

In the cloud, it’s quite easy to create virtual machines to run computations for you. It’s also quite easy to take advantage of various Big Data databases for storing large amounts of data (these are often referred to as NoSQL databases).

Cloud Approach

Cloud applications usually don’t calculate things when you ask for them. For instance when you do a Google search, it doesn’t really search anything, it looks up your search in a Big Data database, basically doing a database read that returns the HTML to display. The searching is actually done in the background by agents (or spiders) that are always running, searching the web and adding the data to the Big Data database.

In the cloud it's pretty common to have lots of running processes that are just calculating things on the off chance someone will ask for them.

So in the above example there could be a process running at night that checks each user's KPI settings and performs the calculation, putting the data in the cache, so that the user gets the data instantly first thing in the morning and, unless they hit the manual refresh button, never waits for any calculations to be performed.

That helps things quite a bit, but the user still needs to wait for a SQL query or calculation if they change the settings for their KPI or hit the manual refresh button. A sample KPI configuration screen from Sage 300 is:

[Screenshot: Sage 300 KPI configuration screen]

As you can see from this example there are quite a few different configuration options, but in some sense not a truly ridiculous number.

I've mentioned "Big Data" a few times in this article, but so far all we've talked about is caching a few bits of data, and the number of values being cached won't be very large. Now suppose we calculate all possible values for this setup screen, use the distributed computing power of the cloud to do the calculations, and then store all the possibilities in a "Big Data" database. This is much larger than what we talked about previously, but we are barely scratching the surface of what these databases are meant to handle.

We are using the core functionality of the Big Data database: we are doing reads based on the inputs and returning the JSON (or XML or HTML) to display in the widget. As our cloud grows and we add more and more customers, the number of these will increase greatly, but the Big Data database will just scale out, using more and more servers to perform the work based on the current workload.

Then you can let these run all the time, so the values keep getting updated and even the refresh button (if you bother to keep it), will just get a new value from the Big Data cache. So a SQL query or other calculation is never triggered by a user action ever.

This is the spider/read model. Another approach would be to sync the application's SQL database to a NoSQL database that then calculates the KPIs using MapReduce calculations. But this approach tends to be quite inflexible. However it can work if the sync'ing and transformation of the database solves a large number of queries at once. Creating such a database in a manner that makes the MapReduce queries all run fast is a rather nontrivial undertaking, and it runs the risk that in the end the MapReduces take too long to calculate. The two methods could also be combined: phase one would be to sync into the NoSQL database, then the spider processes calculate the caches, doing the KPI calculations as MapReduce jobs.

This is all a lot of work and a lot of setup, but once in the cloud the customer doesn't need to worry about any of this, just the vendor, and with modern PaaS deployments this can all be automated and scaled easily once it's set up correctly (which is a fair amount of work).

Summary

There are lots of techniques to produce and calculate business KPIs quickly. All these techniques are great, but if you have a cloud solution and you want its opening page to display in less than a second, you need more. This is where the power of the cloud can come in to pre-calculate everything so you never need to wait.

Written by smist08

February 14, 2015 at 7:25 pm

The Sage 300 SDLC


Introduction

A lot of my blog posts are to answer questions that I frequently receive. Then I have a ready answer in the form of a blog posting link, or perhaps people read my blog and it saves me receiving an e-mail. This blog posting is along the same lines, as I get asked quite frequently about our SDLC (Software Development Lifecycle). Usually this is in regards to someone trying to fill out a giant RFP full of questions that are mostly irrelevant to purchasing ERP software.

I covered various aspects of our development process in other blog postings which I'll refer to here. Plus our process is always evolving as we learn from our experiences and try to improve. What I'm writing about here is specifically what the development team for Sage 300 ERP does, but a lot of it is also used by other Sage teams on other projects. There are always slight variations, as the different teams do have their own preferences, and as long as they follow the general standards that's ok.

Within R&D we use the Agile Development Methodology, but R&D exists within a larger context within Sage, much of which doesn’t use Agile frameworks. As a result our Agile development has to fit somewhat within a larger non-Agile system that tracks and coordinates the various projects going on around Sage. This is to ensure all departments know what is going on and can plan accordingly.

Not-Agile

We have a general PMO department that tracks all the various projects. It coordinates getting approval for projects, determining release criteria and coordinating all the various departments such as Marketing, Product Management, IS, etc., so they can do their pieces at the appropriate time.

Initial product ideas come from our Innovation Process, but before converting an innovation idea into a larger development effort there is usually some POC work and some initial rough estimates that then lead to a project kickoff process where an initial business plan, along with an initial project plan are presented and either approved or rejected.

The project is then tracked by PMO as it goes through development, and at the end, when the Agile part of the development is done, there is a release certification meeting where all the stakeholders get together to certify that the solution is ready for customers. This includes that the software is ready and of a high quality, but also that support is ready, training material is available, back end systems are set up to take orders, marketing is ready and all the other pieces that go into a product launch.


Also at this time we run a final regression to ensure that everything is working correctly. Generally this is only a couple of weeks as a sanity check that the Agile process below worked correctly.

Before making the product available to all customers, we first spend a few months in a controlled release with a handful of live customers to ensure everything works well for them. This not only tests the software, but their ability to be on-boarded, supported and trained. After this has proceeded successfully then the product is made available for everyone.

Some of the reasons for this non-Agile framework are that a number of parts of our organization haven't adopted Agile yet, though eventually they will need to if we are to have truly effective cloud based products. The other reason, which I blogged about here, is the need to coordinate many disparate teams working on different parts of a larger solution.

Agile

Within R&D we use the Agile Development Methodology. I’ve blogged about Agile development a number of times: here, here, here, here and here.

We’ve been using Agile programming for a number of years now. We use 2 week sprints, have sprint planning, maintain a backlog, have daily standups, sprint demos, sprint retrospectives and all the other aspects of Agile. We use VersionOne to manage our projects. In Agile you execute the backlog story by story and have very tight rules on what it means for a story to be “done”. This way as each story is completed, it is fully tested, documented and ready for use. The important thing here is to not build up a large list of defects that need to be fixed near the end of the project. Basically when the last story is finished (done) then the product should be ready to put in customer’s hands.


The doneness criteria vary a bit by Agile team, but here are the doneness criteria for a team on one of our current projects:

  • All tasks items in a backlog story have a closed status
  • The code builds successfully without compiler errors and warnings and without ReSharper issues
  • The code is deployed to test environment successfully
  • The code is checked into the correct repository and branch
  • The code conforms to Sage Branding Guidelines.
  • The code performs to performance standards
  • The code is reviewed for applicable use of all documented standards.
  • The code is refactored and reviewed for good OOP implementation
  • The code conforms to UX wireframes, design and CSS guidelines.
  • Code coverage minimum percentage is 70% with a target of +90%
  • The UI displays correctly in all supported browsers (IE, Safari, Firefox, Chrome)
  • Unit tests are included and run successfully
  • Automation tests are included and run successfully in test environment
  • The environment has not been corrupted (database, etc.)
  • The QA and QA Review tasks are included and complete
  • Reviewed and accepted by the Product Owner
  • Implement and document any build and/or deployment changes
  • Knowledge Transfer Activities (wiki updated, code walkthrough, group code review, etc.)
  • Remaining hours for tasks are set to zero and the backlog item is closed

 

Ensuring stories are done is the key to a well-run Agile process, probably more important than a well-structured prioritized backlog.

As part of developing for the cloud, we want to release fairly regularly and can’t afford long manual regression tests. As a result we have a lot of emphasis on Unit Tests and Automated tests, such as those blogged here.

Similarly branching strategy, source code management, build management and continuous integration are also important parts of this process.

Summary

This was a quick overview of our Sage 300 SDLC. With any big project there are always a lot of moving parts and these have to be tracked accurately and reliably. Agile doesn’t mean that we don’t have deadlines or firm requirements. It does mean that we develop the most important things first and build quality in from the very start.

Bangalore Travel Blog Part 1


Introduction

I’m currently travelling to Bangalore, India to visit a large number of off-shore team members we work with on our projects. So for something different, I thought I’d try travel-blogging. I’ll be writing this blog as I go along and then periodically post my progress. This is my second trip to India, I visited Chennai back in 2008 for ten days.

Sage is partnered with several Indian companies to provide extra capacity for our projects. For this trip I’m visiting Sonata which has teams participating in several important Sage projects. I blogged previously on accelerating projects, which was really talking about our adding capacity through additional teams at Sonata. It’s great to have extra capacity and the ability to get more done, but it’s also a big challenge keeping all the teams moving in the same direction and continuously removing roadblocks and bottlenecks. Our goal is to treat the Sonata teams as if they were regular Sage agile teams with full access to all Sage resources like source control and other internal systems. To make this process work many Sonata folk visit our Richmond office and we have several staff visiting Bangalore. This is my turn.

Indian Visa Process

To back up a bit, travelling to India is a bit more difficult than travelling to other places due to the Visa process. To do this I needed letters from Sage and from Sonata giving my reasons for travel and stating that I'm still being paid by a Canadian company. Then you need to fill out an extremely long on-line visa application that includes detailed questions on yourself, your spouse and your parents. Gathering all the data for this form took several days, and when you save this form it is quite buggy about retrieving what you had before, so you need to check it closely. You also need a US sized passport photo and a Canadian passport with 1 year left (since I wanted a 1 year multi-entry visa). With all this you make an appointment with the company (BLS) that processes the visas. The earliest I could book was 1 week later. Then you show up for your appointment and they double check all your papers and take the rather large fee ($200). They take all this as well as your passport and promise to process it in 7 business days. Basically at this point they give it all to the Indian consulate for processing. Fortunately this all went fine and 5 days later their website said my passport was ready for pickup. Of all the countries I've visited, this is by far the hardest visa process.

Shots

If you don’t travel much, you should go to a travel clinic to get the right shots when visiting India. I travel quite a bit and all my shots are up to date. The first time I went to India, I needed to get 5 injections. I do recommend taking something like Dukoral since I’ve never had any tummy troubles when I’ve taken this ahead of a trip. Malaria pills may or may not be required depending on where you are going.

It’s a Long Way

I travelled to Bangalore via Hong Kong. It's a 14 hour flight to Hong Kong and then a 6 hour flight on to Bangalore. Generally the best way to endure a long flight is to sleep through it. The Hong Kong flight left at 2:30pm, so I wasn't sleepy until near the end. By the time I was on the Bangalore flight I was so tired I slept through most of it. For some reason all international flights in and out of Bangalore arrive and leave between midnight and 4am. When you arrive you don't really care what time it is, you just want to get to the hotel and sleep, so make sure you book for a day earlier so you don't then need to wait till 2pm to check in.

Managing in Bangalore

Some Indian cities can be quite hard to navigate, but Bangalore is fairly easy. You do need to know how to cross the streets (i.e. make eye contact and walk slowly across the traffic, which will flow around you). If you are worried about having to eat lots of extremely hot Indian food, then don't worry, there are many good restaurants from other cultures, like Italian or Mexican. Plus generally I don't find the Indian food that hot here (perhaps it's toned down for the tourists). I find the level of English quite good and haven't had any problems communicating.

Getting directions and finding your way around isn’t that hard and the area around my hotel (in the downtown old part of the city) seems quite safe. There are quite a few parks around and you can see quite a bit just walking around.


The office is 10km away and parts of the journey can be quite congested, but I find the traffic here to be better than in Chennai, Bangkok or Ho Chi Minh City. They are building a new elevated metro system which is causing quite a bit of road disruption along the way, but they seem to keep traffic flowing.

Work Environments

The office building is located in the Global Village Tech Park outside the city. There are quite a few tech companies located here including Sonata, MindTree, Accenture, HP and Texas Instruments. The office environment I’m at is quite nice. The building is modern and the work environment is quite pleasant. It uses an open office concept and provides a nice productive team environment.


Since this is India, the company parking is quite different than what you would expect in North America. To maneuver quickly through traffic, two wheels is the way to go. If everyone using motorbikes were to switch to cars it would be total gridlock here.


Conclusion

This ends part 1 of my travel to India. I’m now here and settled in. Getting here is half the battle, now it’s time to get some productive work done.

Written by smist08

November 4, 2014 at 12:28 pm

