Stephen Smith's Blog

Musings on Machine Learning…

Archive for the ‘Uncategorized’ Category

Introducing the Raspberry Pi Pico W


Introduction

At the end of June, the Raspberry Pi Foundation released a new version of the Raspberry Pi Pico that includes a wireless communications chip. This new Pico is named the Raspberry Pi Pico W and only costs $2 more than the base Pico. Basically, they added an Infineon CYW43439 chip, which supports Wifi and Bluetooth, though only Wifi is supported through the SDK currently. This makes the Raspberry Pi Pico W a true IoT (Internet of Things) board, not requiring a physical connection to communicate.

Several other vendors have already added Wifi and Bluetooth to their own RP2040-based boards. We reviewed the SeeedStudio Wio RP2040 here.

Compatibility

The hardware designers at Raspberry worked hard to add this wireless chip without affecting people’s existing applications. This meant they couldn’t use any of the GPIO pins exposed on the header. They also didn’t want to release a new version of the RP2040 chip, so they had to repurpose a connection that was already in use on the board. The choice they made, to minimize the impact on existing projects, was to take over the pin that previously drove the Pico’s onboard LED, the reasoning being that flashing the onboard LED couldn’t be too important to people’s projects. You can still access the LED, but it is now wired to a pin on the CYW43439 chip and you need to go through the CYW43 device driver included in the Pico’s SDK. To blink the LED you first need to initialize the high-level driver:

    cyw43_arch_init()

Then you can set the LED high or low with:

    cyw43_arch_gpio_put(CYW43_WL_GPIO_LED_PIN, led_state);
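If you are using MicroPython rather than the C SDK, the Pico W firmware hides this driver behind a named pin, so a simple blink loop still works. Here is a minimal sketch, assuming the Pico W build of MicroPython, where the string “LED” maps to the wireless chip’s LED pin:

    # MicroPython sketch for the Pico W: "LED" resolves to the CYW43439's LED pin,
    # so no explicit driver initialization is needed.
    from machine import Pin
    import time

    led = Pin("LED", Pin.OUT)

    while True:
        led.toggle()        # flip the LED state
        time.sleep(0.5)     # wait half a second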

To make room for the new chip, a few things on the board have been moved around; notably, the debug pins are now in the middle of the board rather than at the edge. When wiring up the Pico W, make sure you use the “Getting Started” guide for the Pico W, which contains the correct diagrams.

Programming the Pico W

The Pico W was added as the pico_w board type in the SDK. By default the RP2040 SDK will build for a regular Pico, so if you want wireless functionality you need to add “-DPICO_BOARD=pico_w” to your cmake command:

    cmake -DPICO_BOARD=pico_w -DCMAKE_BUILD_TYPE=Debug ..

Then pico_w.h from the boards/include/boards folder will be used and you have access to all the wireless features.

The documentation in the SDK is still a bit thin on the new features, but the SDK examples are a great resource for how to do things, as working code is better than a dry API reference.

Wireless Interface

The Infineon CYW43439 uses an SPI interface to communicate with the RP2040. The RP2040 contains hardware to handle SPI communications; however, the Pico W cannot use it, since those peripherals are connected to GPIO pins that are exposed externally. Raspberry didn’t want to reduce the number of available GPIO pins, so instead they chose to use the programmable I/O processors (PIO) to handle the communications. I imagine this will be a problem for anyone already using PIO heavily in their projects, as the program memory for a PIO block is only 32 instructions. There seems to be a #define in the SDK controlling the use of PIO for this, but I don’t see any alternative provided if you turn it off.

The CYW43 chip supports Bluetooth, but that support isn’t in the Pico’s SDK yet. There are already lots of examples of using various internet protocols to perform tasks like transmitting weather data to a web server to display on web pages. There is support for both C and MicroPython.
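To give a flavour of the MicroPython side, below is a minimal sketch that joins a Wifi network using MicroPython’s standard network module; the SSID and password are placeholders for your own network:

    # Minimal Wifi connection sketch for MicroPython on the Pico W.
    # "MyNetwork" and "MyPassword" are placeholders for your own credentials.
    import network
    import time

    wlan = network.WLAN(network.STA_IF)     # station (client) mode
    wlan.active(True)
    wlan.connect("MyNetwork", "MyPassword")

    # Wait up to roughly ten seconds for an IP address
    for _ in range(10):
        if wlan.isconnected():
            break
        time.sleep(1)

    print("Connected:", wlan.isconnected(), wlan.ifconfig())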

The source code for the CYW43 driver and the rest of the wireless support is all included with the SDK. Infineon has good documentation for the chip if you want to get into the details.

Pico H

Raspberry also officially released the Pico H and Pico WH, which are just a Pico and a Pico W with pre-soldered headers. If you are using a breadboard and want to save some soldering, I would recommend getting these versions.

Summary

It’s great to see Raspberry producing a version of the Pico with built-in wireless capability for true IoT applications. At this point the SDK is still being filled out, but there is plenty there to get you started. It’s too bad they couldn’t use the RP2040’s SPI hardware and instead had to use PIO for this; I enjoy using PIO and would rather it was left entirely for my own projects. I predict the Raspberry Pi Pico WH will become the most popular Pico model.

Written by smist08

August 26, 2022 at 11:29 am

Using Gaia for Search & Rescue


Introduction

Gaia is a navigation program for Apple and Android smartphones. It mimics the functionality of standalone GPS units, then adds functionality utilizing the extra capabilities most phones include these days. There are lots of articles and YouTube videos on how to use Gaia for trip planning and following trails, so I won’t cover that here. This article is more about the use cases that Search & Rescue (SAR) teams typically have for a GPS. It assumes some familiarity with standalone GPS units and SAR practices, but very little knowledge of Gaia. I use an Apple iPhone 11, so this article features screenshots from the Apple version, but the Android version should be similar.

Installation & Setup

You need to install and set up Gaia before using it; you don’t want to be doing this part at 3am on a callout. First, install Gaia GPS from the Apple App Store: search for “Gaia GPS” and then press the “Get” button to install it.

My screen says Open rather than Get because I already have Gaia installed.

Notice the second item is a subscription for extra features. The crucial part for SAR is offline access: with the free level, you cannot download maps to use the program when you don’t have a cell data connection, and for SAR this is a showstopper. If a SAR group uses Gaia, it probably has a team license, and SAR management can provide you with a link that will connect you to the SAR account.

Create Account

After you install Gaia, the first time you run it you will be prompted to create an account or link to your Facebook account. Use either method to create your Gaia account; this gives you basic access to the program. If your SAR group provided a link to connect to their account, you can click it now to connect your Gaia account to the SAR account and get full access. If you don’t have this, then you will need to buy a premium subscription.

Download Maps for Offline Use

The next thing to do is to download a map of your search region so you can use Gaia offline. The default map is Gaia Topo, which is fine for most purposes. If you want to use a different map or set of overlays, you need to select these first, because this process only downloads the active map. To download the map, tap the ⊕ symbol at the top of the screen.

Select “Download Maps”. This takes you to a map with a resizable rectangle that you can expand to cover your entire search region, in my case the Sunshine Coast:

You probably also want to do this for any regions you travel to for mutual aid; in my case I’ll download Powell River, Squamish, Whistler, Pemberton and the North Shore. I’m not going to talk about overlays in this article, but if you want them included, they must be selected and active before you choose to download. The downloaded files can be quite large, so you need some space on your phone; I would recommend having 1Gig free for maps, which shouldn’t be a big deal if you have a newer phone. Also perform the download while connected to a good Wifi connection so it is faster and doesn’t use up your monthly data cap. If maps overlap, Gaia is reasonably smart and won’t download duplicate data, so you don’t need to be careful about how you tile them (i.e., there is no penalty for overlapping maps).

Configure Units

Next go into “Settings”; the important part is making sure the units are correct for your purposes.

In SAR, you will probably need to return to this menu repeatedly. Often on a search we need both UTM and lat/long coordinates and have to return here to switch back and forth. If we are working with Marine SAR then we may want to use nautical distance units for that particular search. If we travel to the US for mutual aid, we may even need to use Imperial units.

Power Saving

You should also check the power options. I turn off the option to keep the screen on, as this wastes battery. I need to record tracks, but for the most part my phone lives in my pocket and I’m not looking at the screen.

Security

As you use Gaia, your phone is going to ask various permission questions, which you generally want to allow. A key one is allowing Gaia to always access the GPS: if you choose the option to only access the GPS when using the app, then you are going to have holes in your GPS tracks whenever you switch to another app, say to send a message or answer an email.

Basic Usage

Recording a Track

SAR management always requires a recording of where you traveled, so these can all be combined into a master map showing search coverage. The standard format for providing these is a GPX file. There is a “Record” button in the upper left of the screen; remember to hit it before heading out on your search. While recording, it changes to a timer showing how long it’s been recording. When you return to command, tap this box again and choose “Finish Track”. It’s a good idea to give the track a meaningful name on the next screen. It will then show you the track. Tap the three dots in the upper right to get a menu:

Select export and then GPX. Next choose email and enter the email address of your SAR manager. Much easier than doing this with a standalone GPS.

If you want to include pictures with this file, then you need to save the track and all the pictures in the same folder. To do this create a new folder before starting out and then save everything for the day to this folder. This may or may not be helpful.

Getting my Position

Of course you can get your current position from the phone’s compass app, which is handy to copy into a text and send, but it is in lat/long and not very easy to read out over a radio. In SAR we tend to use an abbreviated form of UTM coordinates, so we can give our location over the radio using six digits. You can configure Gaia to show your location by tapping one of the three info boxes at the top of the screen and choosing coordinates for one of them.

Then on the radio, I can give my position as 632733. In reality, our radios transmit our location back to command on a regular basis, so I haven’t had to do this on a real callout, but this was frequently requested during training exercises.

Setting a Waypoint

If you hit the ⊕ sign at the top of the screen, you have two ways to create a waypoint.

Scenario one is entering a marker where you found a clue, or a point you want to make note of for some reason; in this case choose “Add Waypoint (My Location)” and then enter a meaningful title and description.

The other scenario is SAR management radioing you and asking you to go to some point on the map. The easy way to do this is to hit plus and choose “Add Waypoint”; the next screen defaults to your current coordinates, which you can edit to the coordinates you’ve been given. Next enter a description, and the waypoint will appear on the map so you can figure out how to get to it.

Once you create a waypoint, you can tap it, then tap the info icon to open it; from here you can add notes, or even ask Gaia to guide you to the waypoint. You can also edit the waypoint if you need to move it or rename it.

Summary

This was a quick start to Gaia for SAR practitioners. Gaia is a large and sophisticated program, and we only touched on a few aspects of what it can do. The best way to learn a program is to use it and experiment with it; you don’t want the first time you do something to be during a stressful search in the middle of the night. Gaia isn’t meant to completely replace standalone GPS units, but a key to SAR success is redundancy, so if one piece of equipment fails you aren’t stuck. Often the phone app is the easiest to use, since it has extra functionality like being able to email your track in to command.

Written by smist08

March 5, 2021 at 11:36 am

Posted in Uncategorized

5 Best Word Processors for Writers


The Write Cup

By Jeff Hortobagyi, Cathalynn Cindy Labonte-Smith, Elizabeth Rains & Stephen Smith

Introduction

The market for word processors has remained fairly static: since MS Word was released as Multi-Tool Word Version 1.0 in 1983, it has dominated the word-processing market. Its main competitor became WordPerfect, but that program soon fizzled out and became a minor player. There are still loyal WordPerfect users out there, and a WordPerfect Office Professional 2020 suite is available, but at over $500 it has priced itself out of the market. MS Word remains the heavy hitter in the word-processing world and it’s affordable at $6.40/month for the entire MS Office package, but an increasing number of free apps keep driving down its price.

In 2006, Google Docs came along and changed the way people worked. No longer did authors need to print out and make paper copies to make redlines. No longer did they need to attach large…

View original post 4,039 more words

Written by smist08

July 10, 2020 at 9:10 pm

Posted in Uncategorized

Coffee in the Age of Social Distancing


The Write Cup

By Stephen Smith

Introduction

Here in British Columbia, Canada, COVID-19 restrictions are slowly being relaxed. As they are relaxed, coffee shops are scrambling to re-open while meeting the various government regulations for social distancing and cleaning. In this article I’ll discuss the setups and trade-offs various shops are adopting.

Inside vs Outside Seating

It is far easier for coffee shops to offer outside patio seating than to provide inside seating. In both cases social distancing is required and the tables have to be measured to ensure they are sufficiently separated. Many coffee shops don’t have enough room for any inside seating once they keep the people in the counter lineup sufficiently separated; often setting up the counter lineup takes all their inside floor space. Some have an indoor, snaking lineup to the counter, then the pickup area and an exit door.

BEWARE! None of the washrooms are…

View original post 528 more words

Written by smist08

May 29, 2020 at 1:40 pm

Posted in Uncategorized

Tools for the Mobile Writer


The Write Cup

By Stephen Smith

Introduction

Most writers work from home, or from a coffee shop to get a change of scenery. More adventurous writers regularly travel to remote places, to, say, write while lying on an isolated tropical beach. This article covers a number of tools and techniques to stay connected and to collaborate with other writers, editors or publishers.

Staying on the Internet

All modern tools require that you’re connected to the Internet. I blogged on how to stay safe when using coffee shop wifi, but what if there isn’t any wifi to connect to? The best alternative to wifi is the cellular network.

Most cell phone plans include enough data for writers. If you are travelling in a foreign country, consider getting a local SIM card and plan for your phone. This is far cheaper than paying the roaming charges to your home provider.

A few writers compose and…

View original post 592 more words

Written by smist08

March 7, 2020 at 5:12 pm

Posted in Uncategorized

The Facts About Coffee Shop Wifi


My guest post on the Write Cup blog.

The Write Cup

By Stephen Smith

Is Coffee Shop Wifi Safe?

Many people use coffee shops as their offices or workspaces, all using the free Wifi to get their work done. Wifi can be a competitive differentiator for a shop: if a joint has good Wifi, writers and home office workers will flock to it. A few shops don’t have Wifi, to discourage people from staying too long, but then they had better have great coffee to compensate. People worry this isn’t safe and that you will get hacked, and some buy expensive security solutions to protect themselves. Is using coffee shop Wifi really dangerous? Are you going to get hacked using Starbucks’ Wifi? In this article we’ll look at how safe it is and provide some tips to stay secure.

Remember when Internet cafes charged by the minute? Wifi on cruise ships is prohibitively expensive, so better to wait until you’re in port and find a…

View original post 613 more words

Written by smist08

February 28, 2020 at 11:58 am

Posted in Uncategorized

Open Source Photography Toolkit


Introduction

Since retiring, I’ve switched to entirely running open source software. For photography, Adobe Photoshop and Lightroom dominate the scene. Most articles and books are based on these products. The Adobe products have a reputation for being very good, but they are quite expensive, especially since they have switched to a subscription model of pricing. In this article I’m going to talk about the excellent open source programs that work very well in this space.

Basically there are two streams here: the quicker and easier software equivalent to Adobe Lightroom, and then the more technical and sophisticated software equivalent to Adobe Photoshop.

I run all these programs on Ubuntu Linux; however, they all have versions for the Mac and Windows.

You can download the source code for any open source program and have a look at how the programs work. If you find a bug, you can report it, or if you are a programmer you can fix it. Figuring out enough of a program to work on it is a large undertaking, but I feel comforted that that avenue is open to me if I need it.

digiKam

digiKam is an open source photo management program similar to Adobe’s Lightroom. It is easier to use than a full photo editing tool like GIMP or Adobe Photoshop, and has tools to automate the processing of the large number of photos taken in a typical shoot. It can import all the photos from raw format for further processing, it has a pretty good image editor built in, and it has lots of tools for managing your photos, like putting them in albums, assigning keywords, and editing the metadata. There is an extensive search tool, so you can find your photos again if you forget where you put them. There are also tools to publish your photos to various photography websites as well as various social media websites.

[Screenshot: the digiKam photo management interface]

Unlike with Lightroom, there aren’t nearly as many books or tutorials on the product; I only see one book on Amazon. However, the web-based manual for digiKam is pretty good and I find it more than enough. It does peter out near the end, but most of the things that are TBD are also easy to figure out (mostly the specifics of various integrations with third-party web sites).

Another difference is that digiKam actually edits your pictures and doesn’t just store differences like Lightroom does, so you need to be aware of that in your management workflows.

Lightroom costs $9.99/month and is subscription based; digiKam is free. One benefit is you don’t have to worry about having your photos held hostage if you get tired of paying month after month, especially if you are an infrequent user.

GIMP

GIMP is very powerful photo-editing software, an open source equivalent of Adobe Photoshop. I recently saw a presentation by the author of a Photoshop book on his workflow for editing photos, and I was able to go home and perform the exact same workflows in GIMP without any problems. These involved a lot of use of layers and masks, both of which are well supported in GIMP.

[Screenshot: editing a photo in GIMP]

Both Photoshop and GIMP are criticised for being hard to use, but they are the real power tools for photo editing and are both well worth the learning curve to become proficient. There are actually quite a few good books on GIMP as well as many YouTube tutorials on the basic editing tasks.

For 90% of your needs, you can probably use digiKam or Lightroom. But for the really difficult editing jobs you need a tool like this.

Photoshop typically costs $20/month on a subscription basis. GIMP is free.

RawTherapee

GIMP doesn’t have the built-in ability to read raw image files. There are plug-ins that you can install, but I’ve not gotten good results with these; often they work stand-alone, but not from within GIMP. digiKam can process raw files, and doing that en masse is one of its main features.

[Screenshot: RawTherapee]

Sometimes you want a lot of control over this conversion. This is where RawTherapee comes in: it is a very sophisticated conversion program that supports batch processing and offers detailed control over color processing.

Often in the open source world, components are broken out separately rather than bundled into one giant program. This provides more flexibility to mix and match software and allows the development teams to concentrate on what they are really good at.

Typically you would take all your pictures in your camera’s raw mode, convert these to a lossless file format like TIFF, and then do your photo editing in GIMP. This is the harder but more powerful route, as opposed to using digiKam for the entire workflow.

OpenShot

OpenShot is actually movie editing software. I included it here because many photographers like to create slideshows of their work, where the images have nice transitions and change from image to image with the music. OpenShot is an ideal open source program for doing this. If you have a Mac, then you can use iMovie for this, but if you don’t have a Mac or want something that works on any computer, then OpenShot is a good choice.

[Screenshot: the OpenShot video editor]

Summary

There are good open source pieces of software that are very competitive with the expensive commercial software products. Adobe has a near monopoly in the commercial space and tries to squeeze every dime it can out of you, so it’s nice that there is a complete suite of alternatives. I only use open source software for my photography, and have found it easily fills all my needs.

This article only talks about four pieces of software. There are actually many more specialized applications out there that you can easily find by googling. Chances are if you look below the ads in your Google search results, you will find some good free open source software that will do the job for you.

 

Written by smist08

January 5, 2019 at 10:29 pm

TensorFlow on the Raspberry Pi and Beyond


Introduction

You’ve been able to use TensorFlow on a Raspberry Pi for a while, but you’ve had to build it yourself. With TensorFlow 1.9, Google added native support, so you can just use pip3 to install precompiled binaries and be up and running in no time. Although you can do this, general TensorFlow usage on the Raspberry Pi is slow. In this article I’ll talk about some challenges to running TensorFlow on the Raspberry Pi and look at some useful cases that do work. I’ll also compare some operations against my Intel i3 based laptop and the rather beefy servers available through Google’s subsidiary Kaggle.

Installing TensorFlow on a Pi

I saw the press release about how easy it was to install TensorFlow on a Raspberry Pi, so I read the TensorFlow install page for the Pi, checked the prerequisites, and followed the instructions. All I got was strange unhelpful error messages about how there was no package for my version of Python. The claim on the TensorFlow web page is that Python 3.4 or greater is required and I was running 3.4.2, so all should be good. I installed all the prerequisites and dependencies from the TensorFlow script and those all worked, including TensorBoard. But no luck with TensorFlow itself.

After a bit of research, it appeared that the newest version of Raspbian is Stretch, but I was running Jessie. I had assumed that since my operating system was updating itself regularly, it would have installed any newer version of Raspbian. That turns out not to be true. The Raspberry people were worried about breaking things, so they didn’t provide an automatic upgrade path; their recommendation is to just install a new image on a new SD card. I could have done that, but I found instructions on the web on how to upgrade from Jessie to Stretch. I followed the instructions available here, and it all worked fine.

To me, this is really annoying since I wasted quite a bit of time on it. I don’t understand why Raspbian didn’t at least ask if I wanted to upgrade to Stretch, explaining the risks and trade-offs. At any rate, now I know not to trust “sudo apt-get dist-upgrade”; it doesn’t necessarily do what it claims.

After I upgraded to Stretch, doing a “sudo pip3 install TensorFlow” worked quickly and I was up and running.
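A quick sanity check that the install is picked up by Python 3 is to print the version from the interpreter; this just reads TensorFlow’s standard version attribute:

import tensorflow as tf   # should now import without errors
print(tf.__version__)     # prints the installed version, e.g. 1.9.0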

Giving TensorFlow a Run

To try out TensorFlow on my Raspberry Pi, I just copied the first TensorFlow tutorial into IDLE (the Python IDE) and gave it a run.

import tensorflow as tf
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

This tutorial example trains on the MNIST dataset, which is a set of handwritten digits, and then evaluates the test set to see how accurate the model is. This little sample typically achieves 98% accuracy in identifying the digits. The dataset has 60,000 images for training and 10,000 for testing.

I set this running on the Raspberry Pi and it was still running hours later when I went to bed. My laptop ran the same tutorial in just a few minutes. The first time you run the program it downloads the training and test data, which on the Pi was very slow; after that it seems to be cached locally.

Benchmarking

To compare performance, I’ll look at a few different factors. The tutorial program really has three parts:

  1. Downloading the training and test data into memory (from the local cache)
  2. Training the model
  3. Evaluating the test data

Then I’ll compare the Raspberry Pi to my laptop and the Kaggle virtual environment, both with and without GPU acceleration.
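Below is a rough sketch of how those three phases can be timed. This is just an illustrative harness that wraps the tutorial code with Python’s time module; it isn’t necessarily the exact measurement script used for the numbers that follow:

# Illustrative timing harness for the three phases: load, fit and evaluate.
import time
import tensorflow as tf

start = time.time()
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
load_time = time.time() - start

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

start = time.time()
model.fit(x_train, y_train, epochs=5)
fit_time = time.time() - start

start = time.time()
model.evaluate(x_test, y_test)
eval_time = time.time() - start

print("Load: %.1f s  Fit: %.1f s  Eval: %.1f s" % (load_time, fit_time, eval_time))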

 

                        Load Time   Fit Time   Eval Time
    Raspberry Pi              3.6        630         4.7
    i3 Laptop                 0.6         95         0.5
    Kaggle w/o GPU            1.7         68         0.6
    Kaggle with GPU           1.1         44         0.6

 

Keep in mind that my Raspberry Pi is only a 3 and not the newer slightly faster 3 B+. The GPU in the Kaggle environment is the NVIDIA Tesla K80. The server is fairly beefy with 16GB of RAM. The Kaggle environment is virtual and shared, so performance does vary depending on how much is going on from other users.

Results

As you can see, the Raspberry Pi is very slow at fitting a model. The MNIST data is fairly compact as these things go and represents a relatively small dataset. If you want to fit a model and only have a Raspberry Pi, I would recommend doing it in a Kaggle environment from an Internet browser; after all, it is free.

I think the big problem is that the Raspberry Pi only has 1Gig of RAM and will be swapping to the SD card, which isn’t great for performance. My laptop has 4Gig of RAM and a good SSD hard drive. I suspect these factors matter more than the difference between the Intel i3 and the ARM Cortex processor.

So why would you want TensorFlow on the Raspberry Pi then? The usage would be to run pre-trained models for specific applications. For instance perhaps you would want to make a smart door camera. The camera could be hooked up to a Raspberry Pi and then a TensorFlow image recognition model could be run to determine if someone approaching the door should be admitted, and if so, send a signal from a GPIO pin to unlock the door.

From above you might think that evaluation is still too slow on a Raspberry Pi. However, x_test which we are evaluating actually contains 10,000 test images. So performing 10,000 image evaluations in under 5 seconds is actually pretty decent.

A good procedure would be to train the models on a more powerful computer or in the cloud, then run the model on the Pi to create some sort of smart device utilizing the Pi’s great I/O capabilities.
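A sketch of that workflow: save the trained model to a file on the big machine, copy the file to the Pi, then load it there and run predictions. The file name here is arbitrary, and saving in HDF5 format assumes the h5py package is installed:

# On the training machine: save the fitted model from the tutorial above.
# (The .h5 file name is arbitrary; HDF5 saving needs the h5py package.)
model.save('mnist_model.h5')

# On the Raspberry Pi: load the pre-trained model and classify one image.
import numpy as np
import tensorflow as tf

(_, _), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_test = x_test / 255.0

model = tf.keras.models.load_model('mnist_model.h5')
prediction = model.predict(x_test[:1])   # a single 28x28 test image
print("Predicted digit:", np.argmax(prediction), "actual:", y_test[0])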

Summary

The Raspberry Pi, with its great DIY interfacing abilities combined with its ability to run advanced machine learning applications, provides a great platform for developing smart devices. I look forward to seeing all sorts of new smart projects appearing on the various Raspberry Pi project boards.

Written by smist08

August 17, 2018 at 12:09 am

Posted in Uncategorized

Sage Connect 2016


Introduction

The Sage Connect 2016 conference has just wrapped up in Sydney, Australia. I was very happy to be able to head over there and give a one-day training class on our new Web UIs SDK, and then give a few sessions in the main conferences. This year the conference combined all the Sage Australia/New Zealand/Pacific Islands products into one show. So there were customers and partners from Sage HandiSoft, Sage MicrOpay, Sage One as well as the usual people from Sage CRM, Sage 300, Sage CRE and Sage X3.

The show was on for two days where the first day was for customers and partners and then the second day was for partners only. As a result, the first day had around 600 people in attendance. There was a networking event for everyone at the end of the first day and then a gala awards dinner for the partners after the second day.

A notable part of the keynote was the kick-off of the Sage Foundation in Australia with a sponsorship of Orange Sky Laundry. Certainly a worthwhile cause that is doing a lot of good work helping Australia’s homeless population.

There was a leadership forum featuring three prominent Australian entrepreneurs discussing their careers and providing advice based on their experience. These were Naomi Simpson of Red Balloon, Brad Smith of Braaap Motorcycles and Steve Vamos of Telstra. I found Brad Smith especially interesting as he created a motorcycle manufacturer from scratch.

The event was held at the conference center at the Australian Technology Park. This was very interesting since it was converted from the Eveleigh Railway Workshops and still contains many exhibits and equipment from that era. It created an interesting contrast of 2016 era high tech to the heavy industry that was high tech around 1900.

Sage 300

The big news for Sage 300 was the continued roll out of our Web UIs. The Sage 300 2016.1 release, just being rolled out, adds the I/C, O/E and P/O screens along with quite a few other screens and enhancements. Jaqueline Li, the Product Manager for Sage 300, was also at the show and presented the roadmap for what customers and partners can expect in the next release.

Sage is big on promoting the golden triangle of Accounting, Payments and Payroll. In Australia this is represented by Sage 300, Sage Payment Solutions and Sage MicrOpay, which all integrate to complete the triangle for customers. Sage Payment Solutions (SPS) is the same one as in North America and now operates in the USA, Canada and Australia.

Don Thomson, one of the original founders of Accpac and the developer of the Access-C compiler, was present representing his current venture, TaiRox. Here he is being interviewed by Mike Lorge, the Managing Director of Sage Business Solutions, on the direction of Sage 300 during one of the keynote sessions.

[Photo: Don Thomson being interviewed by Mike Lorge]

Development Partners

Sage 300 has a large community of ISVs that provide specialized vertical accounting modules, reporting tools, utilities and customized solutions. These solutions have been instrumental in making Sage 300 a successful product and a successful platform for business applications. Without these companies’ relentless, passionate support, Sage 300 wouldn’t have anywhere near the market share it has today.

There were quite a few exhibiting at the Connect conference as well as providing pre-conference training and conference sessions. Some of the participants were: Altec, Accu-Dart, AutoSimply, BSP Software, Dingosoft, Enabling, Greytrix, HighJump, InfoCentral, Orchid, Pacific Technologies, Symphony, TaiRox and Technisoft.

[Photo: development partner exhibits at Sage Connect 2016]

I gave a pre-conference SDK training class on our new Web UIs, so hopefully we will be seeing some Web versions of these products shortly.

Summary

It’s a long flight from Vancouver to Sydney, but at least it’s a direct flight. The time zone difference is 19 hours ahead, so it feels like 5 hours back, which isn’t too bad. Going from Canadian winter to Australian summer is always enjoyable, getting some sunshine and feeling the warmth. Sydney was hopping, with tourist season in full swing, multiple cruise ships docked in the harbor, Chinese New Year celebrations underway and all sorts of other events going on.

The conference went really well, and was exciting and energizing. Hopefully everyone learned something and became more excited about what we have today and what is coming down the road.

Of course you can’t visit Australia without going to the beach, so here is one last photo, in this case of Bondi Beach. Surf’s up!

[Photo: Bondi Beach]

Written by smist08

February 25, 2016 at 2:46 am

Adding Your Application to the Home Page


Introduction

We’ve been talking about how to develop Web UIs using our beta SDK, but so far we’ve only been running these in Visual Studio; we haven’t talked about how to deploy them in a production system. In this article we’ll discuss how to add your menus to the Home Page, which files need to be installed where, and a few configuration notes to observe.

We’ll continue playing with Project and Job Costing (PJC), so first we’ll add PJC onto the end of the More menu in the Home Page:

[Screenshot: PJC added to the More menu on the Home Page]

As part of this we’ll build and deploy the PJC application so that we can run its UIs in a deployed environment, rather than just running the screens individually in Visual Studio like we have been.

[Screenshot: a PJC screen running from the Home Page]

The Code Generation Wizard

When you create your solution, you get a starting skeleton Sage.PM.BusinessRepository/Menu/PMMenuModuleHelper.cs. I’m using PM since I’m playing at creating PJC Web UIs, but instead of PM you will get whatever your application’s two-letter prefix is. If you don’t have such a prefix, remember to register one with Sage to avoid conflicts with other Sage 300 SDK developers. Similarly, I use Sage as my company, but in reality this will be whatever company name you specified when you created the solution. This MenuModuleHelper.cs file gives the name of the XML file that defines your application’s Sage 300 Home Page menu structure. This C# source file is also where you put code to dynamically hide and show your menu items; for instance, if you have some multi-currency-only UIs, this is where you would put the code to hide them in the case of a single-currency database (or application).

The solution wizard creates a starting PMMenuDetails.xml file in the root of the Sage.Web project. Then each time you run the code generation wizard, it adds another item for the UI just generated. This produces a rather flat structure, so you need to polish it a bit, as well as fix up the strings in the generated MenuResx.resx file in the Sage.PM.Resources project. This resource file contains all the strings that are displayed in the menu. Further, you can optionally update all the generated files for the other supported languages.

One caveat on the MenuDetails.xml file is that you must give a security resource that the user has rights to, or nothing will display; leaving this out or putting N/A won’t work. One good reference point is that, since these are XML files, you can see all of Sage’s own MenuDetails.xml files by looking in the Sage 300\Online\Web\App_Data\MenuDetail folder. Note that the way the customize screen works, it removes items and puts them in a company folder under these. It will regenerate them if the file changes, but if you have troubles you might try clearing these folders to force them to be regenerated.

Below is a sample XML element for a single UI to give a bit of flavor of what the MenuDetails.xml file contains.

 

  <item>
    <MenuID>PM4001</MenuID>
    <MenuName>PM4001</MenuName>
    <ResourceKey>PMCostType</ResourceKey>
    <ParentMenuID>PM2000</ParentMenuID>
    <IsGroupHeader>false</IsGroupHeader>
    <ScreenURL>PM/CostType</ScreenURL>
    <MenuItemLevel>4</MenuItemLevel>
    <MenuItemOrder>2</MenuItemOrder>
    <ColOrder>1</ColOrder>
    <SecurityResourceKey>PMCostType</SecurityResourceKey>
    <IsReport>false</IsReport>
    <IsActive>true</IsActive>
    <IsGroupEnd>false</IsGroupEnd>
    <IsWidget>false</IsWidget>
    <Isintelligence>false</Isintelligence>
    <ModuleName>PM</ModuleName>
  </item>

Post Build Utility

Now that we have our menu defined and our application screens running individually in debug mode inside Visual Studio, how do we deploy it to run inside IIS as a part of the Sage 300 system? Which DLLs need to be copied, which configuration files need to be copied and where do they all go? To try these steps, make sure you have the latest version of the Sage 300 SDK Wizards and the matching newest beta build.

The Wizard adds a post build event to the Web project that will deploy all the right files to the local Sage 300 running in IIS. The MergeISVProject.exe utility can also be run standalone outside of Visual Studio; it’s a handy mechanism to copy your files. It’s usually a good idea to restart IIS before testing this way, to ensure all the new files are loaded.

[Screenshot: the post build event in the Visual Studio project settings]

This utility basically copies the following files to places under the Sage300\online\web folder:

  • xml is the configuration file which defines your application to Sage 300. Think of this as being like roto.dat for the Web; it defines which of your DLLs to load, using Unity dependency injection, at startup.
  • App_Data\MenuDetail\PMMenuDetails.xml is your menu definition that we talked about earlier.
  • Areas\PM\*.* are all your Razor Views and their accompanying JavaScript, basically anything that needs to go to the Browser.
  • Bin\Sage.PM.*.dll and Bin\Sage.Web.DLL are the DLLs that you need to run. (Keep in mind that I’m using Sage for my company name; you will get whatever your company is instead of Sage in these.)

With these in place your application is ready to roll.

Update 2016/01/20: This tool was updated to support compiled Razor Views and the command line is now:

Call "$(ProjectDir)MergeISVProject.exe" "$(SolutionPath)"  "$(ProjectDir)\"
     {ModuleName}MenuDetails.xml $(ConfigurationName) $(FrameworkDir)

Plus it is only run when the solution (not an individual project) is set for a “Release” build.

Compiled Views

When we ship Sage 300, all our Razor Views are pre-compiled. This means the screens start much faster. If you don’t compile them, then when a View is first accessed, IIS needs to invoke the C# compiler to compile it, and we’ve found that this process can take ten seconds or so. Plus, the results are cached and the cache is reset every few days, causing this to happen all over again. Another plus is that pre-compiled Views can’t easily be edited, which means people can’t arbitrarily change them and cause problems for future upgrades.

Strangely, Visual Studio doesn’t have a dialog where you can set whether you want your Views pre-compiled; you have to edit the Sage.Web.csproj file directly and change the XML value:

<MvcBuildViews>false</MvcBuildViews>

between true and false yourself in a text editor.

The Sage 300 system is set so that it only runs compiled Razor Views. If you do want to run un-compiled Razor Views, then you need to edit Sage300\online\web\precompiledapp.config and change updatable from false to true.

Beta note: As I write this the MergeISVProject utility doesn’t copy compiled Views properly. This will be fixed shortly, but in the meantime if you want to test with compiled Views you need to copy these over by hand.

New Beta note: This tool now fully supports compiled razor views.

Beta note: The previous beta wouldn’t successfully compile if you were set to use compiled Views, this has been fixed and the solution wizard now includes all the references necessary to do this.

Summary

This article was meant to give a quick overview of the steps to take once you have a screen working in Visual Studio debug mode, to being able to test it running in the Sage 300 Home Page as part of a proper deployment. Generally, the tools help a lot and hopefully you don’t find this process too difficult.

 

Written by smist08

December 29, 2015 at 4:26 pm