Stephen Smith's Blog

Musings on Machine Learning…

Archive for the ‘Uncategorized’ Category

5 Best Word Processors for Writers


The Write Cup

By Jeff Hortobagyi, Cathalynn Cindy Labonte-Smith, Elizabeth Rains & Stephen Smith

Introduction

The word-processor market has remained fairly static since MS Word was released as Multi-Tool Word Version 1.0 in 1983, and it has dominated the market ever since. Its main competitor was WordPerfect, but that program soon fizzled out and became a minor player. There are still loyal WordPerfect users out there, and a WordPerfect Office Professional 2020 suite is available, but at over $500 it has priced itself out of the market. MS Word remains the heavy hitter in the word-processing world, and at $6.40/month for the entire MS Office package it is affordable, but an increasing number of free apps keep driving down its price.

In 2006, Google Docs came along and changed the way people worked. No longer did authors need to print out and make paper copies to make redlines. No longer did they need to attach large…

View original post 4,039 more words

Written by smist08

July 10, 2020 at 9:10 pm


Coffee in the Age of Social Distancing


The Write Cup

By Stephen Smith

Introduction

Here in British Columbia, Canada, COVID-19 restrictions are slowly being relaxed. As they are relaxed, coffee shops are scrambling to re-open while meeting the various government regulations for social distancing and cleaning. In this article I’ll discuss the setups and trade-offs different shops are adopting.

Inside vs Outside Seating

It is far easier for coffee shops to offer outside patio seating than to provide inside seating. In both cases social distancing is required and the tables have to be measured to ensure they are sufficiently separated. Many coffee shops don’t have enough room for any inside seating, since they have to keep the people in the counter lineup sufficiently separated, and setting up that lineup often takes all their inside floor space. Some have an indoor lineup that snakes to the counter, then the pickup area and then an exit door.

BEWARE! None of the washrooms are…

View original post 528 more words

Written by smist08

May 29, 2020 at 1:40 pm


Tools for the Mobile Writer


The Write Cup

By Stephen Smith

Introduction

Most writers work from home, or from a coffee shop for a change of scenery. More adventurous writers regularly travel to remote places, say to write while lying on an isolated tropical beach. This article covers a number of tools and techniques to stay connected and to collaborate with other writers, editors or publishers.

Staying on the Internet

All modern tools require that you’re connected to the Internet. I blogged on how to stay safe when using coffee shop wifi, but what if there isn’t any wifi to connect to? The best alternative to wifi is the cellular network.

Most cell phone plans include enough data for writers. If you are travelling in a foreign country, consider getting a local SIM card and plan for your phone. This is far cheaper than paying the roaming charges to your home provider.

A few writers compose and…

View original post 592 more words

Written by smist08

March 7, 2020 at 5:12 pm


The Facts About Coffee Shop Wifi


My guest post on the Write Cup blog.

The Write Cup

By Stephen Smith

Is Coffee Shop Wifi Safe?

Many people use coffee shops as their offices or workspaces, all using the free Wifi to get their work done. Wifi can be a competitive differentiator for a shop. If a joint has good Wifi, writers and home office workers will flock to it. A few shops don’t offer Wifi to discourage people from staying too long, but then they had better have great coffee to compensate. People worry that this isn’t safe and that they will get hacked. Some buy expensive security solutions to protect themselves. Is using coffee shop Wifi really dangerous? Are you going to get hacked using Starbucks’ Wifi? In this article we’ll look at how safe it is and provide some tips to stay secure.

Remember when Internet cafes charged by the minute? Wifi on cruise ships is prohibitively expensive, so better to wait until you’re in port and find a…

View original post 613 more words

Written by smist08

February 28, 2020 at 11:58 am


Open Source Photography Toolkit


Introduction

Since retiring, I’ve switched entirely to running open source software. For photography, Adobe Photoshop and Lightroom dominate the scene, and most articles and books are based on these products. The Adobe products have a reputation for being very good, but they are quite expensive, especially since they switched to a subscription pricing model. In this article I’m going to talk about the excellent open source programs that work very well in this space.

Basically there are two streams here: the quicker and easier software equivalent to Adobe Lightroom, and the more technical and sophisticated software equivalent to Adobe Photoshop.

I run all these programs on Ubuntu Linux; however, they all have versions for the Mac and Windows.

You can download the source code for any open source program and have a look at how the programs work. If you find a bug, you can report it, or if you are a programmer you can fix it. Figuring out enough of a program to work on it is a large undertaking, but I feel comforted that that avenue is open to me if I need it.

digiKam

digiKam is an open source photo management program similar to Adobe’s Lightroom. It is easier to use than a full photo editing tool like GIMP or Adobe Photoshop, and it has tools to automate the processing of the large number of photos taken in a typical shoot. It can import all your photos from raw format for further processing, it has a pretty good image editor built in, and it provides lots of tools for managing your photos, like putting them in albums, assigning keywords, and editing the metadata. There is an extensive search tool, so you can find your photos again if you forget where you put them. There are also tools to publish your photos to various photography websites as well as various social media sites.

[Screenshot: digiKam]

Unlike Lightroom, there aren’t nearly as many books or tutorials on the product; I only see one book on Amazon. However, the web-based manual for digiKam is pretty good and I find it more than enough. It does peter out near the end, but most of the things that are TBD are also easy to figure out (mostly the specifics of various integrations with third-party websites).

Another difference is that digiKam actually edits your pictures and doesn’t just store differences the way Lightroom does, so you need to be aware of that in your management workflows.

Lightroom is subscription based and costs $9.99/month; digiKam is free. One benefit is that you don’t have to worry about having your photos held hostage if you get tired of paying month after month, especially if you are an infrequent user.

GIMP

GIMP is very powerful photo-editing software. It is an open source equivalent of Adobe Photoshop. I recently saw a presentation by an author of a book on Photoshop on his workflow for editing photos with Photoshop. I was able to go home and perform the exact same workflows in GIMP without any problems. These involved a lot of use of layers and masks, both of which are well supported in GIMP.

[Screenshot: GIMP]

Both Photoshop and GIMP are criticised for being hard to use, but they are the real power tools for photo editing, and both are well worth the learning curve to become proficient with. There are actually quite a few good books on GIMP as well as many YouTube tutorials on the basic editing tasks.

For 90% of your needs, you can probably use digiKam or Lightroom. But for the really difficult editing jobs you need a tool like this.

Photoshop typically costs $20/month on a subscription basis. GIMP is free.

RawTherapee

GIMP doesn’t have the built-in ability to read raw image files. There are plug-ins that you can install, but I’ve not gotten good results with these; often they work stand-alone, but not from within GIMP. digiKam can process raw files, and doing that en masse is one of its main features.

[Screenshot: RawTherapee]

Sometimes you want a lot of control when you do this processing. This is where RawTherapee comes in. It is a very sophisticated conversion program that supports batch processing and offers fine-grained color processing.

Often in the open source world, components are broken out separately rather than bundled into one giant program. This provides more flexibility to mix and match software and allows the development teams to concentrate on what they are really good at.

Typically you would take all your pictures in your camera’s raw mode, convert these to a lossless file format like TIFF and then do your photo editing in GIMP. This is the harder, but more powerful route as opposed to using digiKam for the entire workflow.
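As a rough sketch of that first step (my own illustration, not from the original article), here is how a folder of raw files could be batch-converted to TIFF by calling RawTherapee’s command-line tool from Python. The folder names and the .CR2 extension are assumptions, and rawtherapee-cli needs to be installed and on your PATH.

import subprocess
from pathlib import Path

RAW_DIR = Path("~/Pictures/raw").expanduser()    # hypothetical input folder
TIFF_DIR = Path("~/Pictures/tiff").expanduser()  # hypothetical output folder
TIFF_DIR.mkdir(parents=True, exist_ok=True)

# Convert each raw file (here assumed to be Canon .CR2) to TIFF for editing in GIMP.
for raw_file in sorted(RAW_DIR.glob("*.CR2")):
    subprocess.run(
        ["rawtherapee-cli",
         "-o", str(TIFF_DIR),   # output folder
         "-t",                  # write TIFF output
         "-c", str(raw_file)],  # -c comes last and lists the file(s) to convert
        check=True)

You could just as easily point a loop like this at digiKam’s export folder; the point is only that the raw-to-lossless conversion step can be automated before the hand editing starts.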

OpenShot

OpenShot is actually movie editing software. I included it here because many photographers like to create slideshows of their work, where the images have nice transitions and change from image to image with the music. OpenShot is an ideal open source program for doing this. If you have a Mac you can use iMovie for this, but if you don’t have a Mac, or want something that works on any computer, then OpenShot is a good choice.

[Screenshot: OpenShot]

Summary

There are good open source pieces of software that are very competitive with the expensive commercial products. Adobe has a near monopoly in the commercial space and tries to squeeze every dime it can out of you. It’s nice that there is a complete suite of alternatives. I only use open source software for my photography, and have found that it easily fills all my needs.

This article only talks about four pieces of software. There are actually many more specialized applications out there that you can easily find by googling. Chances are if you look below the ads in your Google search results, you will find some good free open source software that will do the job for you.

 

Written by smist08

January 5, 2019 at 10:29 pm

TensorFlow on the Raspberry Pi and Beyond


Introduction

You’ve been able to use TensorFlow on a Raspberry Pi for a while, but you’ve had to build it yourself. With TensorFlow 1.9, Google added native support, so you can just use pip3 to install precompiled binaries and be up and running in no time. Even though installation is now easy, general TensorFlow usage on the Raspberry Pi is slow. In this article I’ll talk about some challenges to running TensorFlow on the Raspberry Pi and look at some useful cases that do work. I’ll also compare some operations against my Intel i3 based laptop and the rather beefy servers available through Google’s subsidiary Kaggle.

Installing TensorFlow on a Pi

I saw the press release about how easy it was to install TensorFlow on a Raspberry Pi, so I read the TensorFlow install page for the Pi, checked the prerequisites, and followed the instructions. All I got was strange unhelpful error messages about how there was no package for my version of Python. The claim on the TensorFlow web page is that Python 3.4 or greater is required and I was running 3.4.2, so all should be good. I installed all the prerequisites and dependencies from the TensorFlow script and those all worked, including TensorBoard. But no luck with TensorFlow itself.

After a bit of research, it appeared that the newest version of Raspbian is Stretch, but I was running Jessie. I had assumed that since my operating system was updating itself, it would have installed any newer version of Raspbian. That turns out not to be true. The Raspberry Pi people were worried about breaking things, so they didn’t provide an automatic upgrade path. Their recommendation is to just install a new image on a new SD card. I could have done that, but I found instructions on the web on how to upgrade from Jessie to Stretch. I followed the instructions available here, and it all worked fine.

To me, this is really annoying since I wasted quite a bit of time on it. I don’t understand why Raspbian didn’t at least ask if I wanted to upgrade to Stretch, explaining the risks and trade-offs. At any rate, now I know not to trust “sudo apt-get dist-upgrade”; it doesn’t necessarily do what it claims.

After I upgraded to Stretch, doing a “sudo pip3 install TensorFlow” worked quickly and I was up and running.

Giving TensorFlow a Run

To try out TensorFlow on my Raspberry Pi, I just copied the first TensorFlow tutorial into IDLE (the Python IDE) and gave it a run.

import tensorflow as tf

# Load the MNIST handwritten digit dataset (downloaded on first run, cached after that)
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
# Scale pixel values from 0-255 down to 0.0-1.0
x_train, x_test = x_train / 255.0, x_test / 255.0

# A simple fully-connected network: flatten the 28x28 image, one hidden layer, softmax output
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train for 5 epochs, then measure accuracy on the held-out test set
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

This tutorial example trains on the MNIST dataset, which is a set of handwritten digits, and then evaluates the test set to see how accurate the model is. This little sample typically achieves 98% accuracy in identifying the digits. The dataset has 60,000 images for training and 10,000 for testing.

I set this running on the Raspberry Pi and it was still running hours later when I went to bed. My laptop ran this same tutorial in just a few minutes. The first time you run the program, it downloads the test data; on the Pi this was very slow. After that it seems to be cached locally.

Benchmarking

To compare performance, I’ll look at a few different factors. The tutorial program really has three parts:

  1. Downloading the training and test data into memory (from the local cache)
  2. Training the model
  3. Evaluating the test data

Then I’ll compare the Raspberry Pi to my laptop and the Kaggle virtual environment, both with and without GPU acceleration.
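A minimal sketch of how those three phases can be timed (my own illustration, not the original benchmark script) looks something like this:

import time
import tensorflow as tf

def timed(label, fn):
    # Run fn(), print how long it took, and return its result
    start = time.time()
    result = fn()
    print("{}: {:.1f} seconds".format(label, time.time() - start))
    return result

# 1. Load the (locally cached) training and test data
(x_train, y_train), (x_test, y_test) = timed(
    "Load Time", tf.keras.datasets.mnist.load_data)
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 2. Train the model
timed("Fit Time", lambda: model.fit(x_train, y_train, epochs=5))

# 3. Evaluate the test data
timed("Eval Time", lambda: model.evaluate(x_test, y_test))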

 

                     Load Time (s)   Fit Time (s)   Eval Time (s)
Raspberry Pi              3.6            630             4.7
I3 Laptop                 0.6             95             0.5
Kaggle w/o GPU            1.7             68             0.6
Kaggle with GPU           1.1             44             0.6

 

Keep in mind that my Raspberry Pi is only a 3 and not the newer slightly faster 3 B+. The GPU in the Kaggle environment is the NVIDIA Tesla K80. The server is fairly beefy with 16GB of RAM. The Kaggle environment is virtual and shared, so performance does vary depending on how much is going on from other users.

Results

As you can see, the Raspberry Pi is very slow at fitting a model. The MNIST data is fairly compact as these things go and represents a relatively small dataset. If you want to fit a model and only have a Raspberry Pi, I would recommend doing it in a Kaggle environment from an Internet browser. After all, it is free.

I think the big problem is that the Raspberry Pi only has 1GB of RAM and will be swapping to the SD card, which isn’t great for performance. My laptop has 4GB of RAM and a good SSD. I suspect these are more important factors than comparing the Intel i3 to the ARM Cortex processor.

So why would you want TensorFlow on the Raspberry Pi then? The usage would be to run pre-trained models for specific applications. For instance, perhaps you would want to make a smart door camera. The camera could be hooked up to a Raspberry Pi, and a TensorFlow image recognition model could then be run to determine whether someone approaching the door should be admitted, and if so, send a signal from a GPIO pin to unlock the door.
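As a rough sketch of that idea (my illustration, not from the original article): load a model that was trained and saved on a more powerful machine, classify a frame from the camera, and drive a GPIO pin. The model file name, class index and pin number below are all hypothetical.

import numpy as np
import tensorflow as tf
import RPi.GPIO as GPIO

DOOR_PIN = 18                 # hypothetical GPIO pin wired to the door lock relay
MODEL_FILE = "door_model.h5"  # hypothetical model trained and saved on a bigger machine
AUTHORIZED_CLASS = 1          # hypothetical class index meaning "admit this person"

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOOR_PIN, GPIO.OUT, initial=GPIO.LOW)

# Load a pre-trained Keras model saved with model.save() on the training machine
model = tf.keras.models.load_model(MODEL_FILE)

def check_visitor(image):
    # image: numpy array already resized and scaled to the model's input shape
    probabilities = model.predict(np.expand_dims(image, axis=0))[0]
    if np.argmax(probabilities) == AUTHORIZED_CLASS:
        GPIO.output(DOOR_PIN, GPIO.HIGH)   # unlock the door
    else:
        GPIO.output(DOOR_PIN, GPIO.LOW)    # stay locked

The training happens elsewhere; the Pi only runs inference and toggles the pin, which plays to its strengths.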

From above you might think that evaluation is still too slow on a Raspberry Pi. However, x_test which we are evaluating actually contains 10,000 test images. So performing 10,000 image evaluations in under 5 seconds is actually pretty decent.

A good procedure would be to train the models on a more powerful computer or in the cloud, then run the model on the Pi to create some sort of smart device utilizing the Pi’s great I/O capabilities.

Summary

The Raspberry Pi, with its great DIY interfacing abilities combined with its ability to run advanced machine learning applications, provides a great platform for developing smart devices. I look forward to seeing all sorts of new smart projects appearing on the various Raspberry Pi project boards.

Written by smist08

August 17, 2018 at 12:09 am


Sage Connect 2016


Introduction

The Sage Connect 2016 conference has just wrapped up in Sydney, Australia. I was very happy to be able to head over there and give a one-day training class on our new Web UI SDK, and then give a few sessions in the main conference. This year the conference combined all the Sage Australia/New Zealand/Pacific Islands products into one show, so there were customers and partners from Sage HandiSoft, Sage MicrOpay, Sage One as well as the usual people from Sage CRM, Sage 300, Sage CRE and Sage X3.

The show was on for two days where the first day was for customers and partners and then the second day was for partners only. As a result, the first day had around 600 people in attendance. There was a networking event for everyone at the end of the first day and then a gala awards dinner for the partners after the second day.

A notable part of the keynote was the kick-off of the Sage Foundation in Australia with a sponsorship of Orange Sky Laundry. Certainly a worthwhile cause that is doing a lot of good work helping Australia’s homeless population.

There was a leadership forum featuring three prominent Australian entrepreneurs discussing their careers and providing advice based on their experience. These were Naomi Simpson of Red Balloon, Brad Smith of Braaap Motorcycles and Steve Vamos of Telstra. I found Brad Smith especially interesting as he created a motorcycle manufacturer from scratch.

The event was held at the conference center at the Australian Technology Park. This was very interesting since it was converted from the Eveleigh Railway Workshops and still contains many exhibits and equipment from that era. It created an interesting contrast of 2016 era high tech to the heavy industry that was high tech around 1900.

Sage 300

The big news for Sage 300 was the continued roll out of our Web UIs. With the Sage 300 2016.1 release just being rolled out, this adds the I/C, O/E and P/O screens along with quite a few other screens and enhancements. Jaqueline Li, the Product Manager for Sage 300, was also at the show and presented the roadmap for what customers and partners can expect in the next release.

Sage is big on promoting the golden triangle of Accounting, Payments and Payroll. In Australia this is represented by Sage 300, Sage Payment Solutions and Sage MicrOpay, which all integrate to complete the triangle for customers. Sage Payment Solutions (SPS) is the same one as in North America and now operates in the USA, Canada and Australia.

Don Thomson, one of the original founders of Accpac and the developer of the Access-C compiler, was present representing his current venture TaiRox. Here he is being interviewed by Mike Lorge, the Managing Director of Sage Business Solutions, on the direction of Sage 300 during one of the keynote sessions.

[Photo: Don Thomson interviewed by Mike Lorge]

Development Partners

Sage 300 has a large community of ISVs that provide specialized vertical accounting modules, reporting tools, utilities and customized solutions. These solutions have been instrumental in making Sage 300 a successful product and a successful platform for business applications. Without these companies’ relentless, passionate support, Sage 300 wouldn’t have anywhere near the market share it has today.

There were quite a few exhibiting at the Connect conference as well as providing pre-conference training and conference sessions. Some of the participants were: Altec, Accu-Dart, AutoSimply, BSP Software, Dingosoft, Enabling, Greytrix, HighJump, InfoCentral, Orchid, Pacific Technologies, Symphony, TaiRox and Technisoft.

[Photo: exhibitor booths at the conference]

I gave a pre-conference SDK training class on our new Web UIs, so hopefully we will be seeing some Web versions of these products shortly.

Summary

It’s a long flight from Vancouver to Sydney, but at least it’s a direct flight. The time zone difference is 19 hours ahead, so it feels like 5 hours back, which isn’t too bad. Going from Canadian winter to Australian summer is always enjoyable, getting some sunshine and feeling the warmth. Sydney was hopping, with tourist season in full swing, multiple cruise ships docked in the harbor, Chinese New Year celebrations underway and all sorts of other events going on.

The conference went really well, and was exciting and energizing. Hopefully everyone learned something and became more excited about what we have today and what is coming down the road.

Of course you can’t visit Australia without going to the beach, so here is one last photo, in this case of Bondi Beach. Surf’s up!

[Photo: Bondi Beach]

Written by smist08

February 25, 2016 at 2:46 am

Adding Your Application to the Home Page


Introduction

We’ve been talking about how to develop Web UIs using our beta SDK. But so far we’ve only been running these in Visual Studio; we haven’t talked about how to deploy them in a production system. In this article we’ll discuss how to add your menus to the Home Page, which files need to be installed where, and a few configuration notes to observe.

We’ll continue playing with Project and Job Costing, so first we’ll add PJC onto the end of the More menu in the Home Page:

[Screenshot: PJC added to the More menu on the Home Page]

As part of this we’ll build and deploy the PJC application so that we can run its UIs in a deployed environment, rather than just running the screens individually in Visual Studio like we have been.

[Screenshot: a PJC screen running from the Home Page]

The Code Generation Wizard

When you create your solution, you get a starting skeleton Sage.PM.BusinessRepository/Menu/PMMenuModuleHelper.cs. I’m using PM since I’m playing at creating PJC Web UIs, but instead of PM you will get whatever your application’s two-letter prefix is. If you don’t have such a prefix, remember to register one with Sage to avoid conflicts with other Sage 300 SDK developers. Similarly, I use Sage as my company, but in reality this will be whatever company name you specified when you created the solution. This MenuModuleHelper.cs file gives the name of the XML file that defines your application’s Sage 300 Home Page menu structure. This C# source file is also where you put code to dynamically hide and show your program menu items, so for instance if you have some multi-currency-only UIs, this is where you would put the code to hide them in the case of a single currency database (or application).

The solution wizard creates a starting PMMenuDetails.xml file in the root of the Sage.Web project. Then each time you run the code generation wizard it will add another item for the UI just generated. This produces rather a flat structure, so you need to polish it a bit as well as fix up the strings in the generated MenuResx.resx file in the Sage.PM.Resources project. This resource file contains all the strings that are displayed in the menu. Further, you can optionally update all the generated files for the other supported languages.

One caveat on the MenuDetails.xml file is that you must give a security resource that the user has rights to, or nothing will display; leaving this out or putting N/A won’t work. Since these are XML files, you can see all of Sage’s own MenuDetails.xml files for comparison by looking in the Sage 300\Online\Web\App_Data\MenuDetail folder. Note that the way the customize screen works, it removes items from these files and puts them in a company folder underneath. It will regenerate them if the file changes, but if you have trouble you might try clearing these folders to force them to be regenerated.

Below is a sample XML element for a single UI to give a bit of flavor of what the MenuDetails.xml file contains.

 

  <item>
    <MenuID>PM4001</MenuID>
    <MenuName>PM4001</MenuName>
    <ResourceKey>PMCostType</ResourceKey>
    <ParentMenuID>PM2000</ParentMenuID>
    <IsGroupHeader>false</IsGroupHeader>
    <ScreenURL>PM/CostType</ScreenURL>
    <MenuItemLevel>4</MenuItemLevel>
    <MenuItemOrder>2</MenuItemOrder>
    <ColOrder>1</ColOrder>
    <SecurityResourceKey>PMCostType</SecurityResourceKey>
    <IsReport>false</IsReport>
    <IsActive>true</IsActive>
    <IsGroupEnd>false</IsGroupEnd>
    <IsWidget>false</IsWidget>
    <Isintelligence>false</Isintelligence>
    <ModuleName>PM</ModuleName>
  </item>

Post Build Utility

Now that we have our menu defined and our application screens running individually in debug mode inside Visual Studio, how do we deploy it to run inside IIS as a part of the Sage 300 system? Which DLLs need to be copied, which configuration files need to be copied and where do they all go? To try these steps, make sure you have the latest version of the Sage 300 SDK Wizards and the matching newest beta build.

The Wizard adds a post build event to the Web project that will deploy all the right files to the local Sage 300 running in IIS. The MergeISVProject.exe utility can also be run standalone outside of Visual Studio; it’s a handy mechanism to copy your files. It’s usually a good idea to restart IIS before testing this way to ensure all the new files are loaded.

[Screenshot: the post build event in Visual Studio]

This utility basically copies the following files to places under the Sage300\online\web folder:

  • xml is the configuration file which defines your application to Sage 300. Think of this as being like roto.dat for the Web. It defines which of your DLLs to load using Unity dependency injection at startup.
  • App_Data\MenuDetail\PMMenuDetails.xml is your menu definition that we talked about earlier.
  • Areas\PM\*.* are all your Razor Views and their accompanying JavaScript – basically anything that needs to go to the Browser.
  • Bin\Sage.PM.*.dll and Bin\Sage.Web.DLL are the DLLs that you need to run. (Keep in mind that I’m using Sage for my company name, you will get whatever your company is instead of Sage in these).

With these in place your application is ready to roll.

Update 2016/01/20: This tool was updated to support compiled razor view and the command line is now:

Call "$(ProjectDir)MergeISVProject.exe" "$(SolutionPath)"  "$(ProjectDir)\"
     {ModuleName}MenuDetails.xml $(ConfigurationName) $(FrameworkDir)

Plus it is only run when the solution (not an individual project) is set for a “Release” build.

Compiled Views

When we ship Sage 300, all our Razor Views are pre-compiled. This means the screens start much faster. If you don’t compile them, then when first accessed, IIS needs to invoke the C# compiler to compile them, and we’ve found that this process can take ten seconds or so. Plus, the results are cached and the cache is reset every few days, causing this to happen all over again. Another plus is that pre-compiled Views can’t easily be edited, which means people can’t arbitrarily change them and cause problems for future upgrades.

Strangely, Visual Studio doesn’t have a dialog where you can set whether you want your Views pre-compiled; you have to edit the Sage.Web.csproj file directly in a text editor and change the XML value:

<MvcBuildViews>false</MvcBuildViews>

between true and false yourself.

The Sage 300 system is set so that it only runs compiled Razor Views. If you do want to run un-compiled Razor Views, then you need to edit Sage300\online\web\precompiledapp.config and change updatable from false to true.

Beta note: As I write this the MergeISVProject utility doesn’t copy compiled Views properly. This will be fixed shortly, but in the meantime if you want to test with compiled Views you need to copy these over by hand.

New Beta note: This tool now fully supports compiled razor views.

Beta note: The previous beta wouldn’t successfully compile if you were set to use compiled Views, this has been fixed and the solution wizard now includes all the references necessary to do this.

Summary

This article was meant to give a quick overview of the steps to take once you have a screen working in Visual Studio debug mode to get it running in the Sage 300 Home Page as part of a proper deployment. Generally, the tools help a lot and hopefully you don’t find this process too difficult.

 

Written by smist08

December 29, 2015 at 4:26 pm

Sage 300 Web UI SDK – Adding JavaScript


Introduction

Last week we talked about adding UI controls to our Web UI project. In the early days of the Web, this would be all you needed to do. The user would just enter the data and hit save (or submit). However modern Web applications are expected to be more responsive and interactive. This is achieved by adding JavaScript code which can perform local processing or make calls to the server for additional data or to perform additional actions. Most modern Web applications contain quite a bit of JavaScript and achieve quite a high degree of user interaction.


In our source code for the new Web UIs, it turns out that 80% of the application code is C# and 20% is JavaScript. Although it is only 20%, this is still a lot of code and care needs to be taken with it. Some programmers really love JavaScript, some really hate it. JavaScript is a free-form interpreted language that is very forgiving, which allows a lot more freedom in how you structure your programs. Debugging and writing JavaScript is quick and easy since there is no big build/compile step. On the other hand, if you capitalize or spell something wrong, there is no compiler to catch the mistake, and debugging these errors can be quite frustrating.

We use jQuery to isolate ourselves from Browser differences; for the most part, with newer versions of all the Browsers, the differences in dialects of JavaScript are much smaller than they used to be, but they still have to be taken into account. Usually the differences manifest themselves in error handling and security concerns. Sometimes one Browser might ignore an error and keep on going whereas another Browser will throw an exception. Similarly, as some practices that lead to security problems are tightened, the implementation can be a bit uneven across the Browsers. Fortunately, most of our application programming doesn’t hit these cases, but it does highlight the importance of testing in all the supported Browsers.

Another good thing about JavaScript is that the tools have all gotten better. You can debug your code in Visual Studio while it runs in Internet Explorer quite seamlessly, alongside debugging your C# code. Further, the other Browsers all have great debugging tools that can really help; a lot of times you can get several views on something to see what is going wrong. Plus there are lint tools for JavaScript to help find common typos and to enforce programming standards you might want to adopt.

Frameworks

We provide a global framework for doing things like popping up Finders. We also have lots of routines to handle our standard data like dates and numbers. We use a number of open source libraries like jQuery and Knockout.js, as well as the (partly commercial) Kendo UI library. Generally, if you know our framework, you don’t need to know that much from the other libraries, but as you get more sophisticated you will want to leverage them as well. There is also nothing stopping you from including further libraries, just that we won’t be able to support you in their use.

Server Independent

We don’t use any libraries with matching client and server components. Hence all communications are handled by our JavaScript code, which means we have full control over all communications. This means it’s our responsibility not to be chatty. It also means we write all the code to handle any Ajax calls from the Browser; ASP.Net MVC makes this pretty easy. It also means that if you want to add your own components (say another UI control), then you can do so as long as you follow this same basic approach.

Example Code

The Web UI code generation wizard creates three JavaScript files for each UI. These are:

  1. screennameKoExtn.js – This is an extension to the knockout data binding where you can add some JavaScript generated calculated fields to the model.
  2. screennameRepository.js – This file contains a method for each RPC call that can be made. We start with the basic CRUD operations, but you would add your screen specific ones here.
  3. screennameBehaviour.js – This is all the other JavaScript code. You may want to break this file up for larger UIs. But for smaller UIs this is a handy place to contain all the JavaScript for working with the controls on your form.

The Behaviour.js file contains a large object for the screen, with functions to bind controls to methods and to do various processing when a control is activated. For instance, for our sample PJC CostType UI, the init routine calls initButtons, which binds all the buttons to code, including the Save button:

       // Save Button
       $("#btnSave").bind('click', function () {
              sg.utls.SyncExecute(costTypeUI.saveCostType);
       });

Then the saveCostType routine is:

saveCostType: function () {
    if ($("#frmCostType").valid()) {
        var data = sg.utls.ko.toJS(modelData, costTypeUI.computedProperties);
        if (modelData.UIMode() === sg.utls.OperationMode.SAVE) {
            costTypeRepository.update(data, costTypeUISuccess.update);
        } else {
            costTypeRepository.add(data, costTypeUISuccess.update);
        }
    }
},

The costTypeRepository.update routine is in the Repository.js file and performs the Ajax call to do the save:

       // Update
       update: function(data, callbackMethod) {
              costTypeAjax.call("Save", data, callbackMethod);
       }

This Ajax.call routine will actually transmit the data model to the server, where it is picked up by a controller and processed. We’ll talk about the server side of things in a later article. The entire file is a bit on the long side to include in the blog, but hopefully this gives a bit of the flavour of what it contains.

The modelData.UIMode is a predefined calculated field in the screennameKoExtn.js file and keeps track of whether the UI is in Add or Save mode.

Summary

This was a quick glimpse of the JavaScript that is always running behind the scenes in our new Sage 300 Web UIs. Once we cover the server side, we’ll have to consider a lot of cases that go from the HTML to the JavaScript to the C# to the Sage 300 Business Logic to the SQL database and then all the way back. This is just the second step on this journey after last week’s article.

 

Written by smist08

December 12, 2015 at 8:22 pm

Unstructured Time at Sage


Introduction

Unstructured time is becoming a common way to stimulate innovation and creativity in organizations. Basically, you give employees a number of hours each week to work on any project they like. They do need to make a proposal and, at the end, give a demo of working software. The idea is to work on projects that developers feel are important and are passionate about, but that the business in general doesn’t think are worthwhile, considers too risky, or has given a very low priority. Companies like Google and Intuit have been very successful at implementing this and getting quite good results.

[Comic: Dilbert on Google’s 20% time]

Unstructured Time at Sage

The Sage Construction and Real Estate (CRE) development team at Sage has been using unstructured time for a while now. They have had quite a lot of participation and it has led to products like a time and expense iPhone application. Now we are rolling out unstructured time to other Sage R&D centers including ours, here in Richmond, BC.

At this point we are starting out slowly with 4 hours of unstructured time a sprint (every two weeks). Anyone using this needs to submit a project proposal and then do a demo of working code when they judge it’s advanced enough. The proposals can be pretty much anything vaguely related to business applications.

The goal is for people to work on things they are passionate about, to get a chance to play with new bleeding edge technologies before anyone else, and to develop that function, program or feature that they’ve always thought would be great but the business has always ignored. I’m really looking forward to what the team will come up with.

[New Yorker cartoon: “So many toys, so little unstructured time”]

We are still doing Hackathons, Ideajams and our regular innovation process. This is just another initiative to further drive innovation at Sage.

Crazy Projects at Google

Our unstructured time needs to be used for business applications, but I wonder what unstructured time is like at Google, where they seem to come up with things that have nothing to do with search or advertising. Is it Google’s unstructured time that leads to self-driving cars, Google Glass, military robots, human brain simulations or any of their many green projects? Hopefully these get turned into good things and aren’t just Google trying to create Skynet for real. Maybe we’ll let our unstructured time go crazy as well?

Anathem

I’m a big fan of Neal Stephenson, and recently read his novel Anathem. Neal’s novels can be a bit off-putting since they are typically 1000 pages long, but I really enjoy them. One of the themes in Anathem is monasteries occupied by mathematicians who are divided into groups by how often they report their results to the outside world. The lowest order reports every year, next is a group that reports every ten years, then a group that reports every 100 years, and finally the highest group that only reports every 1000 years. These groups don’t interact with anyone outside their order except for the week when they report and exchange information and literature with the outside world. This is in contrast to how we operate today, where we are driven by “internet time” and have to produce results quickly, ignoring anything that can’t be done quickly.

So imagine you could go away for a year to work on a project, or go away for ten years to work on something. Perhaps going away for 100 years or 1000 years would pose some other problems that the monks in the novel had to solve. The point is to imagine what you could accomplish if you had that long. Would you use different research approaches and methods than we typically use today? It is certainly an intriguing prospect, contrasted with our current need to produce something every few months.

My Project

So why am I talking about Anathem and unstructured time together? Well, one problem we have is how to get started on big projects with lots of risk. Suppose you know we need to do something, but doing it is hard and time consuming? Every journey has to start with a first step, but sometimes making that first step can be quite difficult. I’ve had the luxury of being able to do unstructured time for some time, because I’m a software architect and not embedded in an agile sprint team. So I see technologies that we need to adopt, but they are large and won’t be on Product Managers’ roadmaps.

So I’ve done simple POCs in the past, like producing a mobile app using Argos. But more recently I embarked on producing a 64-bit version of Sage 300. This worked out quite well and wasn’t too hard to get going. But then I got ambitious and decided to add Unicode into the mix. This is proving more difficult, but is progressing. The difficulty with these projects is that they involve changing a large amount of the existing code base, and estimating how much work they are is very difficult. As I get a Unicode G/L going, it becomes easier to estimate, but I couldn’t have taken the first step on the project without using unstructured time.

Part of the problem is that we expect our Agile teams to accurately estimate their work and then rate them on how well they do this (so that they are accountable for their estimates). This has the side effect that they are then very resistant to working on things that are open ended or hard to estimate. Generally, for innovation to take hold, the performance management system needs a bit of tweaking to encourage innovation and higher risk tasks, rather than only encouraging meeting commitments and making good estimates.

Now unlike Anathem, I’m not going to get 100 years to do this or even 10 years. But 1 year doesn’t seem so bad.

Summary

Now that we are adding unstructured time to our arsenal of innovation initiatives, I have high hopes that we will see all sorts of innovative new products, technologies and services emerge at the other end. Of course we are just starting this process, so it will take a little while for things to get built.