Stephen Smith's Blog

Musings on Machine Learning…

Kali Linux on the Raspberry Pi


Introduction

Raspbian is the main operating system for the Raspberry Pi, but there are quite a few alternatives. Raspbian is based on Debian Linux, and Debian has seen good uptake on ARM, which means most Linux applications have ARM-compiled packages available through the Debian APT package manager. As a consequence it’s quite easy to create a Raspberry Pi Linux distribution, so there are quite a few of them. Kali Linux is a specialist distribution oriented to hackers (both black and white hat). It comes with a large number of hacking tools for gaining access to networks, compromising computers, spying on communications and other fun things. One cool thing is that Kali Linux has a stripped down version for the Raspberry Pi that is oriented towards a number of specialized purposes. However, with the apt-get package manager you can add pretty well anything that has been left out.

If you watch the TV show Mr. Robot (highly recommended) then you might notice that all the cool in-the-know people are running Kali Linux. If you want to get a taste of what it’s all about and you have a Raspberry Pi, then all you need is a spare micro-SD card to give it a try.

Are You a Black or White Hat?

If you are a black hat hacker looking to infiltrate or damage another computer system, then you probably aren’t reading this blog. Instead you are somewhere on the darknet reading much more malicious articles than this one. This article is oriented more to white hat hackers or to system administrators looking to secure their computing resources. The reason it’s important for system administrators to know about this stuff is that they need to know what they are really protecting against. Hackers are very clever and are always coming up with new hacks and techniques. So it’s important for the good guys to know a bit about how hackers think and to have defenses and protections against the imaginative attacks that might come their way. This now includes the things the bad guys might try to do with a Raspberry Pi.

Anyone who is responsible for securing a network or computer has to test their security, and certainly one easy way to get started is to hit it with all the exploit tools included with Kali Linux.

Why Kali on the Pi?

A lot of hacking tasks like cracking WiFi passwords take a lot of processing power. Cracking WPA2 passwords is usually done on very powerful computers with multiple GPU cards and very large dictionary databases. Accomplishing this on a Raspberry Pi would pretty much take forever if it could actually do it. Many hacking tasks are of this nature, very compute intensive.

The Raspberry Pi is useful due to its low cost and small size. The low cost makes it disposable (if you lose it then it doesn’t matter so much) and the small size means you can hide it easily. So for instance one use would be to hide a Raspberry Pi at the site you are trying to hack. The Raspberry Pi can then monitor the WiFi traffic looking for useful data packets that can give away information. Or you could even leave the Pi somewhere hidden, connected to a hardwired ethernet connection. Kali Linux then has tools to exfiltrate this information secretly and lets you remotely control the Pi to direct various attacks.

Many companies build their security like an egg, with a hard-to-penetrate shell, and often locating a device on their premises can bypass their main security protections. You can then run repeated metasploit attacks looking for weaknesses from the inside. Remember your security should be more like an onion with multiple nested layers, so getting through one doesn’t give an attacker access to everything.

Installing Kali Linux

The Kali Linux web site includes a complete disk image of the Raspberry Pi version. You just need to burn this to a micro-SD card using a tool like ApplePi Baker. Then you just put the micro-SD in your Raspberry Pi, turn it on and off you go. However there are a few necessary steps to take before you really start:

  1. The root password is toor, so change this the first time you boot up.
  2. The Kali Linux instructions point out that you need to regenerate your SSH host keys, since otherwise you are using the ones included with the image. The download page has instructions on how to do this.
  3. The image is configured for 8GB, so if you have a larger SD card you need to repartition it to get all the free space. I used the GParted program for this, which I got via “apt-get install gparted”. Note that to use apt-get you need to be connected to WiFi or the Internet. Another option is to get Raspbian’s configuration program and use that; it works with most variants of Debian Linux and allows you to do some other things like set up overclocking. You can Google for the wget command to get this.
  4. Update the various installed programs via “apt-get update” and “apt-get upgrade”. (If you aren’t still logged on as root you need to sudo these.)

Now you are pretty much ready to play. Notice that you are in a nice graphical environment and that the application menu is full of hacking tools. There aren’t as many hacking tools as in the full Kali distribution, but the ones included all work well on the Raspberry Pi. They also limited the number so you can run off a really cheap 8GB micro-SD card.

I see people say you should stick to command line versions of the tools on the Pi due to its processing power and limited memory, but I found I could add the GUI versions and these worked fine. For instance the nmap tool is installed, but the zenmap graphical front end isn’t. Adding zenmap is easy via “apt-get install zenmap” and then this seems to work fine. I think the assumption is that most people will use the Raspberry version of Kali headless (meaning no screen or keyboard) so it needs to be accessed via remote control software like secure shell which means you want to avoid GUIs.

Summary

Installing Kali Linux on a micro-SD card for your Raspberry Pi is a great way to learn about the various tools that hackers can easily use to try to penetrate, spy on or interfere with people’s computers. There are quite a few books on this, as well as many great resources on the Web. Kali’s website is a great starting point. Anyway, I find it quite eye opening how many readily available tools there are and how easy it is for anyone to use them.


Written by smist08

January 16, 2018 at 3:01 am

Flashing LEDs in Assembler


Introduction

Previously I wrote an article introducing Assembler programming on the Raspberry Pi. That was quite a long article without much of a coding example, so I wanted to produce an Assembler language version of the little program I did in Python, Scratch, Fortran and C to flash three LEDs attached to the Raspberry Pi’s GPIO port on a breadboard. So in this article I’ll introduce that program.

This program is fairly minimal. It doesn’t do any error checking, but it does work. I don’t use any external libraries, and only make calls to Linux (Raspbian) via software interrupts (SVC 0). I implemented a minimal GPIO library using Assembler Macros along with the necessary file I/O and sleep Linux system calls. There probably aren’t enough comments in the code, but at this point it is fairly small and the macros help to modularize and explain things.

Main Program

Here is the main program, which probably doesn’t look structurally much different from the C code, since the macro names roughly match the GPIO library functions the C version called. The main bit of Assembler code here is the loop to flash the lights 10 times. This is pretty straightforward: load 10 into register r6 and then decrement it until it hits zero.

 

@
@ Assembler program to flash three LEDs connected to the
@ Raspberry Pi GPIO port.
@
@ r6 - loop variable to flash lights 10 times
@

.include "gpiomacros.s"

.global _start             @ Provide program starting address to linker

_start: GPIOExport  pin17
        GPIOExport  pin27
        GPIOExport  pin22

        nanoSleep

        GPIODirectionOut pin17
        GPIODirectionOut pin27
        GPIODirectionOut pin22

        @ setup a loop counter for 10 iterations
        mov         r6, #10

loop:   GPIOWrite   pin17, high
        nanoSleep
        GPIOWrite   pin17, low
        GPIOWrite   pin27, high
        nanoSleep
        GPIOWrite   pin27, low
        GPIOWrite   pin22, high
        nanoSleep
        GPIOWrite   pin22, low

        @decrement loop counter and see if we loop
        subs    r6, #1      @ Subtract 1 from loop register setting status register
        bne     loop        @ If we haven't counted down to 0 then loop

_end:   mov     R0, #0      @ Use 0 return code
        lsl     R0, #2      @ Shift R0 left by 2 bits (ie multiply by 4)
        mov     R7, #1      @ Service command code 1 terminates this program
        svc     0           @ Linux service call to terminate program

pin17:      .asciz  "17"
pin27:      .asciz  "27"
pin22:      .asciz  "22"
low:        .asciz  "0"
high:       .asciz  "1"

 

GPIO and Linux Macros

Now the real guts of the program are in the Assembler macros. Again it isn’t too bad. We use the Linux service calls to open, write, flush and close the GPIO device files in /sys/class/gpio. Similarly nanosleep is a Linux service call for a high resolution timer. Note that ARM doesn’t have memory-to-memory instructions or operations on memory, so to do anything we need to load a value into a register, process it and write it back out. Hence to copy the pin number into the file name we load the two pin characters and store them into the file name memory area. Hard coding the offset for this as 20 isn’t great; we could have used a .equ directive, or better yet implemented a string scan, but for quick and dirty this is fine. Similarly we only implemented the parameters we really needed and ignored everything else. We’ll leave it as an exercise to the reader to flesh these out more. Note that when we copy the first byte of the pin number, we include a #1 on the end of the ldrb and strb instructions; this does a post increment by one on the index register that holds the memory location. This means the ARM is really very efficient at accessing arrays (even without using Neon), since we combine the array read/write with the index increment in one instruction.

If you are wondering how you find the Linux service calls, you look in /usr/include/arm-linux-gnueabihf/asm/unistd.h. This C include file has all the function numbers for the Linux system calls. Then you Google the call for its parameters and they go in order in registers r0, r1, …, r6, with the return code coming back in r0.

 

@ Various macros to access the GPIO pins
@ on the Raspberry Pi.

@ R5 is used for the file descriptor

.macro  openFile    fileName
        ldr         r0, =\fileName
        mov         r1, #01     @ O_WRONLY
        mov r7,     #5          @ 5 is system call number for open
        svc         0
.endm

.macro  writeFile   buffer, length
        mov         r0, r5      @ file descriptor
        ldr         r1, =\buffer
        mov         r2, #\length
        mov         r7, #4 @ 4 is write
        svc         0
.endm

.macro  flushClose
@fsync syscall
        mov         r0, r5
        mov         r7, #118    @ 118 is fsync
        svc         0

@close syscall
        mov         r0, r5
        mov         r7, #6      @ 6 is close
        svc         0
.endm

@ Macro nanoSleep to sleep .1 second
@ Calls Linux nanosleep entry point which is function 162.
@ Pass a reference to a timespec in both r0 and r1
@ First is input time to sleep in seconds and nanoseconds.
@ Second is time left to sleep if interrupted (which we ignore)

.macro  nanoSleep
        ldr         r0, =timespecsec
        ldr         r1, =timespecsec
        mov         r7, #162    @ 162 is nanosleep
        svc         0
.endm

.macro  GPIOExport  pin
        openFile    gpioexp
        mov         r5, r0      @ save the file descriptor
        writeFile   \pin, 2
        flushClose
.endm

.macro  GPIODirectionOut   pin
        @ copy pin into filename pattern
        ldr         r1, =\pin
        ldr         r2, =gpiopinfile
        add         r2, #20
        ldrb        r3, [r1], #1 @ load pin and post increment
        strb        r3, [r2], #1 @ store to filename and post increment
        ldrb        r3, [r1]
        strb        r3, [r2]
        openFile    gpiopinfile
        writeFile   outstr, 3
        flushClose
.endm

.macro  GPIOWrite   pin, value
        @ copy pin into filename pattern
        ldr         r1, =\pin
        ldr         r2, =gpiovaluefile
        add         r2, #20
        ldrb        r3, [r1], #1    @ load pin and post increment
        strb        r3, [r2], #1    @ store to filename and post increment
        ldrb        r3, [r1]
        strb        r3, [r2]
        openFile    gpiovaluefile
        writeFile   \value, 1
        flushClose
.endm

.data
timespecsec:   .word   0
timespecnano:  .word   100000000
gpioexp:    .asciz  "/sys/class/gpio/export"
gpiopinfile: .asciz "/sys/class/gpio/gpioxx/direction"
gpiovaluefile: .asciz "/sys/class/gpio/gpioxx/value"
outstr:     .asciz  "out"
            .align  2          @ save users of this file having to do this.
.text

Makefile

Here is a simple makefile for the project if you name the files as indicated. Again note that WordPress and Google Docs may mess up white space and quote characters so these might need to be fixed if you copy/paste.

model: model.o
    ld -o model model.o

model.o: model.s gpiomacros.s
    as -ggdb3 -o model.o model.s

clean:
    rm model model.o

 

IDE or Not to IDE

People often do Assembler language development in an IDE like Code::Blocks. Code::Blocks doesn’t support Assembler-only projects, but you can add Assembler language files to C projects. This is a pretty common way to do development, since you want to do more programming in a higher level language like C, and this way you also get full use of the C runtime. I didn’t do this; I just used a text editor, make and gdb (command line). This way the above program has no extra overhead and the executable is quite small, since there is no C runtime or any other library linked to it. The debug version of the executable is only 2904 bytes long and the non-debug version is 2376 bytes. Of course if I really wanted to reduce executable size, I could have used function calls rather than Assembler macros, as the macros duplicate the code everywhere they are used.

Summary

Assembler language programming is kind of fun. But I don’t think I would want to do too large a project this way. Hats off to the early personal computer programmers who wrote spreadsheet programs, word processors and games entirely in Assembler. Certainly writing a few Assembler programs gives you a really good understanding of how the underlying computer hardware works and what sort of things your computer can do really efficiently. You could even consider adding compiler optimizations for your processor to GCC, after all compiler code generation has a huge effect on your computer’s performance.

Written by smist08

January 7, 2018 at 7:08 pm

Spectre Attacks on ARM Devices


Introduction

I predicted that 2018 would be a very bad year for data breaches and security problems, and we have already started the year with the Intel x86 specific Meltdown exploit and the Spectre exploit that works on all sorts of processors and even on some JavaScript systems (like Chrome). Since my last article was on Assembler programming, and most of these types of exploits are created in Assembler, I thought it might be fun to look at how Spectre works and get a feel for how hackers can retrieve useful data out of what seems like nowhere. Spectre is actually a large new family of exploits, so patching them all is going to take quite a bit of time, and, like the older buffer overrun exploits, they are going to keep reappearing.

I’ve been writing quite a bit about the Raspberry Pi recently, so is the Raspberry Pi affected by Spectre? After all, it affects all Android and Apple devices based on ARM processors. The main Raspberry Pi operating system is Raspbian, which is a variant of Debian Linux optimized for the Pi. A recent criticism of Raspbian is that it is still 32-bit. It turns out that running the ARM in 32-bit mode eliminates a lot of the Spectre attack scenarios. We’ll discuss why this is in the article. If you are running 64-bit software on the Pi (like running Android) then you are susceptible. You are also susceptible to the software versions of this attack, like those in JavaScript engines that support branch prediction (like Chromium’s).

The Spectre hacks work by exploiting how processor branch prediction works, coupled with how data is cached. The exploits use branch prediction to speculatively access data they shouldn’t, and then use the processor cache to retrieve that data for use. The original article by the security researchers is really quite good and worth a read. It’s available here. It has an appendix at the back with C code for Intel processors that is quite interesting.

Branch Prediction

In our last blog post we mentioned that all the data processing assembler instructions can be conditionally executed. This is because if you perform a branch instruction then the instruction pipeline needs to be cleared and restarted, which really stalls the processor. The ARM 32-bit solution was good, as long as compilers are good at generating code that efficiently utilizes conditional execution. Remember that most code for ARM processors is compiled using GCC, and GCC is a general purpose compiler that works on all sorts of processors; its optimizations tend to be general purpose rather than processor specific. When ARM evaluated adding 64-bit instructions, they wanted to keep the instructions 32 bits in length, but they also wanted to add a bunch of instructions (like integer divide), so they made the decision to eliminate the bits used for conditionally executing instructions and have a bigger opcode instead (and hence lots more instructions). I think they also considered that their conditional instructions weren’t being used as much as they should be and weren’t earning their keep. Plus they now had more transistors to play with, so they could do a couple of other things instead. One was that they lengthened the instruction pipeline to be much longer than the previous three stages, and the other was to implement branch prediction. Here the processor has a table of 128 branches and the route each took last time through. The processor then executes the most commonly chosen branch, assuming that once the conditional is figured out it will very rarely need to throw away the work and start over. Generally this longer pipeline with branch prediction led to much better performance results. So what could go wrong?

Consider the branch statement:

 

if (x < array1_size)
    y = array2[array1[x] * 256];


This looks like a good bit of C code to test if an array is in range before accessing it. If it didn’t do this check then we could get a buffer overrun vulnerability by making x larger than the array size and accessing memory beyond the array. Hackers are very good at exploiting buffer overruns. But sadly (for hackers) programmers are getting better at putting these sort of checks in (or having automated or higher level languages do it for them).

Now consider branch prediction. Suppose we execute this code hundreds of times with legitimate values of x. The processor will see the conditional is usually true and the second line is usually executed. So now branch prediction will see this and when this code is executed it will just start execution of the second line right away and work out the first line in a second execution unit at the same time. But what if we enter a large value of x? Now branch prediction will execute the second line and y will get a piece of memory it shouldn’t. But so what, eventually the conditional in the first line will be evaluated and that value of y will be discarded. Some processors will even zero it out (after all they do security review these things). So how does that help the hacker? The trick turns out to be exploiting processor caching.

Processor Caching

No matter how fast memory companies claim their super fast DDR4 memory is, it really isn’t, at least compared to CPU registers. To get a bit of extra speed out of memory access, all CPUs implement some sort of memory cache where recently used parts of main memory are cached in the CPU for faster access. Often CPUs have multiple levels of cache: a super fast one, a fast one and a not quite as fast one. The trick to getting at the incorrectly calculated value of y above is to somehow figure out how to access it from the cache. No CPU has a read-from-cache assembler instruction; this would cause havoc and definitely be a security problem. The real CPU vulnerability is that the speculatively calculated buffer overrun read leaves its target in the cache. Hackers figured out how to infer the value, not by reading it directly, but by timing memory accesses. First clear the cache (some CPUs support this directly, and even if not you can evict it by reading a large block of other memory). Then time how long it takes to read various bytes. Basically a byte in cache reads much faster than a byte from main memory, and this then reveals what the value of y was. Very tricky.

Recap

So to recap, the Spectre exploit works by:

  1. Clear the cache
  2. Execute the target branch code repeatedly with correct values
  3. Execute the target with an incorrect value
  4. Loop through possible values timing the read access to find the one in cache

This can then be put in a loop to read large portions of a program’s private memory.

Summary

The Spectre attack is a very serious new technique for hackers to hack into our data. This will be like buffer overruns and there won’t be one quick fix, people are going to be patching systems for a long time on this one. As more hackers understand this attack, there will be all sorts of creative offshoots that will deal further havoc.

Some of the remedies like turning off branch prediction or memory caching will cause huge performance problems. Generally the real fixes need to be in the CPUs. Beyond this, systems like JavaScript interpreters, or even systems like the .Net runtime or Java VMs could have this vulnerability in their optimization systems. These can be fixed in software, but now you require a huge number of systems to be patched and we know from experience that this will take a very long time with all sorts of bad things happening along the way.

The good news for Raspberry Pi Raspbian users is that the ARM in the older 32-bit mode isn’t susceptible; it is only susceptible through software routes like JavaScript. But as hackers develop these techniques going forward, perhaps they will find a combination that works on the Raspberry Pi, so you can never be complacent.

 

Written by smist08

January 5, 2018 at 10:42 pm

Raspberry Pi Assembly Programming


Introduction

Most University Computer Science programs now do most of their instruction in quite high level languages like Java, which don’t require you to know much about the underlying computer architecture. I think this tends to create quite large gaps in understanding, which can lead to quite bad program design mistakes. Learning to program in C puts you closer to the metal, since you need to know more about memory and low level storage, but nothing really teaches you how the underlying computer works like doing some Assembler language programming. The Raspbian operating system comes with the GNU Assembler pre-installed, so you have everything you need to try Assembler programming right out of the gate. Learning a bit of Assembler teaches you how the Raspberry Pi’s ARM processor works, how a modern RISC architecture processes instructions, and how work is divided between the ARM CPU and the various co-processors which are included on the same chip.

A Bit of History

The ARM processor was originally developed by Acorn Computers as a low cost CPU for their British educational computers (it first shipped in the Acorn Archimedes). The developers of the ARM felt they had something and negotiated a split into a separate company selling CPU designs to hardware manufacturers. Their first big sale was to Apple, to provide a CPU for Apple’s first PDA, the Newton. Their first big success was the inclusion of their chip design in Apple’s iPods. Along the way many chip makers, like TI, which had given up competing on CPUs, built single chip computers around the ARM. These ended up being included in just about every cell phone, including those from Nokia and Apple. Nowadays pretty much every Android phone is also built around ARM designs.

ARM Assembler Instructions

There have been a lot of ARM designs, from the early simple processors with a 26-bit address space, up through 32-bit processors, to the current 64-bit designs. In this article I’m just going to consider the ARM processor in the Raspberry Pi. This processor is a 64-bit processor, but Raspbian is still a 32-bit operating system, so I’ll just be talking about the 32-bit instruction set used here (and I’ll ignore the 16-bit “Thumb” instructions for now).

ARM is a RISC processor, which means the idea is that it executes very simple instructions very quickly, keeping the main processor simple to reduce complexity and power usage. Each instruction is 32 bits long, so the processor doesn’t need to think about how much to increment the program counter for each instruction. Interestingly, nearly all the instructions are in the same format, where you can control whether an instruction sets the status bits, can test the status bits to decide whether to do anything, and can have four register parameters (or 2 registers and an immediate constant). One of the registers can also have a shift operation applied. So how is all this packed into one 32 bit instruction? There are 16 registers in the ARM CPU (these include the program counter, the link return register and the stack pointer). There is also the status register, which can’t be used as a general purpose register. This means it takes 4 bits to specify a register, so specifying 4 registers takes 16 bits out of the instruction.

Below are the formats for the main types of ARM instructions.

Now we break out the data processing instructions in more detail since these comprise quite a large set of instructions and are the ones we use the most.

Although RISC means a small set of simple instructions, we see that by cleverly using every bit in those 32 bits for an instruction that we can pack quite a bit of information.

Since the ARM is a 32-bit processor, meaning among other things that it can address a 32-bit address space, this does lead to some interesting questions:

  1. The processor is 32-bits, but the immediate field in an instruction is much smaller. How do we load arbitrary 32-bit quantities? The immediate field is actually 12 bits: 8 bits of data plus a 4-bit rotate amount. So we can load 8 bits and rotate them into position. If that works, great. If not we have to be tricky by loading and combining two quantities.
  2. Since the total instruction is 32-bits, how do we load a 32-bit memory address? The answer is that we can’t, directly. However we can use the same tricks as in number 1. Plus you can use the program counter: you can add an immediate constant to it, and the assembler often does this when assembling load instructions. Note that since the ARM is pipelined, the program counter tends to be a couple of instructions ahead of the current one, so this has to be properly taken into account.
  3. Why is there the ability to conditionally execute all the data processing instructions? Since the ARM processor is pipelined, doing a branch instruction is quite expensive, since the pipeline needs to be discarded and then reloaded at the new execution point. Having individual instructions conditionally execute can save quite a few branch instructions, leading to faster execution (as long as your compiler is good at generating RISC type code or you are hand coding in Assembler).
  4. The ARM processor has a multiply instruction, but no divide instruction. This seems strange and unbalanced. Multiply (and multiply-and-add) are relatively recent additions to the ARM processor. Divide is still considered too complicated and slow, and is it really used that much anyway? You can do divisions on either the floating point coprocessor or the Neon SIMD coprocessor.

A Very Small Example

Assembler listings tend to be quite long, so as a minimal set let’s start with a program that just exits when run, returning a status code to the command line (you can see it if you type echo $? after running it). Basically we have an assembler directive to define the entry point as a global. We move the immediate value 17 to register R0, then shift it left by 2 bits just to show off a shift instruction (so the status actually returned is 68). We move 1 to register R7, which is the Linux command code to exit the program, and then execute svc 0 to call into the Linux operating system. All calls to Linux use this pattern: set some parameters in registers and then execute “svc 0” to make the call. You can open files, read user input, write to the console, all sorts of things this way.

 

.global _start              @ Provide program starting address to linker
_start: mov     R0, #17     @ Use 17 for test example
        lsl     R0, #2      @ Shift R0 left by 2 bits (ie multiply by 4)
        mov     R7, #1      @ Service command code 1 terminates this program
        svc     0           @ Linux command to terminate program

 

Here is a simple makefile to compile and link the preceding program. If you save the above as model.s and then the makefile as makefile then you can compile the program by typing “make” and run it by typing “./model”. as is the GNU-Assembler and ld is the linker/loader.

 

model: model.o
     ld -o model model.o

model.o: model.s
     as -g -o model.o model.s

clean:
     rm model model.o

 

Note that make is very particular about whitespace. It must be a true “tab” character before the commands to execute. If Google Docs or WordPress changes this, then you will need to change it back. Unfortunately word processors have a bad habit of introducing syntax errors to program listings by changing whitespace, quotes and underlines to typographic characters that compilers (and assemblers) don’t recognize.

Summary

Although these days you can do most things with C or a higher level language, Assembler still has its place in certain applications that require performance or detailed hardware interaction, for instance perhaps tuning the numpy Python library to use the Neon coprocessor for faster vector operations. I do still think that every programmer should spend some time playing with Assembler language so they understand better how the underlying processor and architecture they use day to day really work. The Raspberry Pi offers an excellent environment to do this, with the good GNU Macro Assembler, the modern ARM RISC architecture and the various on-chip co-processors.

Just the Assembler introductory material has gotten fairly long, so we won’t get to an assembler version of our flashing LED program. But perhaps in a future article.

 

Written by smist08

January 4, 2018 at 10:45 pm

C Programming on the Raspberry Pi


Introduction

I blogged on programming Fortran a few articles ago, but this was really a tangent. I installed the Code::Blocks IDE to do some C programming, and when I saw the Fortran support I took a bit of a side trip. In this article I want to get back to C programming on the Raspberry Pi. I’ve been a C programmer for much of my professional career, starting at DREA, then on various flavours of Unix at Epic Data, and then doing Windows programming at a number of companies including Computer Associates/Accpac International/Sage. I’ve done a lot of programming in the object oriented extensions to C, including C++, Java, C# and Objective-C. But I think at heart I still have a soft spot for C and enjoy the fun things you can do with pointers. While I think there are a lot of productivity benefits to languages like Java and C# which take away the most dangerous features of C (like memory pointers and manual memory allocation/deallocation), I still find C to be very efficient and often a quick way to get things done. Admittedly I do all my programming these days in Python, but sometimes Python is frustratingly slow and then I miss C.

In this article we’ll re-implement in C the flashing LED program that I introduced here. We’ve now run the same program in Python, Scratch, Fortran and C. If you want to learn programming on very inexpensive equipment, the Raspberry Pi is a really excellent environment.

GCC and C

The GNU Compiler Collection (GCC) is the main C compiler for Linux development and runs on many other platforms. GCC supports many programming languages now, but C is its original and main language. The GCC C compiler implements the 2011 C language standard (C11) along with a large collection of extensions. You can set compiler flags to enforce strict compliance with the standard to improve portability, but I tend to rely on GCC’s own portability instead. A number of the newer features are very handy, like being able to declare the loop variable inside a for statement, a feature C99 borrowed from C++.

Code::Blocks

You don’t need an IDE to do C development: you can just edit the source files in any text editor, use GNU Make to do the build, and then run the GNU Debugger separately to do any debugging. But generally it’s a bit easier to use an IDE, especially for debugging. Code::Blocks is a popular IDE that runs on the Raspberry Pi (and many other things); it’s fairly lightweight (certainly compared to Eclipse, Xcode or Visual Studio) and has all the standard IDE features programmers expect.

A number of people recommend programming on a more powerful computer (like a good Mac or Windows laptop) and then just transferring the resulting executable to the Pi to run and test. For big projects this might have some productivity benefits, but I find the Pi is up to the task and I can quite happily develop directly on the Raspberry Pi.

Accessing the GPIO

From C you have quite a few alternatives for accessing the GPIO. You can open /dev/mem and get a direct memory mapping to the hardware registers; this requires running as root, but it is more direct. I’m going to go the same route as I did for the Fortran program and use the easier file access through the Raspbian GPIO device driver. This works pretty well and doesn’t require root access. A good web page covering all the ways to access the GPIO from various languages, including C, is here. If you want to see how to access the GPIO hardware registers directly, go for it. Like the Fortran program, I needed a delay after writing to the export file and before accessing the separate file created for the GPIO pin; there seems to be a bit of a race condition that needs to be avoided. Otherwise it’s a fairly simple C program, and accessing the GPIO is pretty standard as accessing Linux devices goes.
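As an aside, the direct-register route mentioned above mostly comes down to address arithmetic. The layout below (ten 3-bit function-select fields per GPFSEL register, one bit per pin in the GPSET0/GPCLR0 registers) is from the BCM2835 peripherals documentation; the helper names are my own, and the mmap() of /dev/gpiomem that would supply the actual register base is left out of this sketch:

```c
#include <stdint.h>

/* Per the BCM2835 peripherals manual, each GPFSELn register packs the
 * 3-bit function-select fields for ten GPIO pins, and pins 0-31 each
 * own one bit in GPSET0/GPCLR0.  These helpers just compute where a
 * given BCM pin number lives; a real program would mmap() /dev/gpiomem
 * to obtain the register base and then apply these offsets. */

static int gpfsel_index(int pin)        /* which GPFSEL register */
{
    return pin / 10;
}

static int gpfsel_shift(int pin)        /* bit offset within that register */
{
    return (pin % 10) * 3;
}

static uint32_t gpio_mask(int pin)      /* the pin's bit in GPSET0/GPCLR0 */
{
    return 1u << (pin % 32);
}
```

For example, pin 17 lives in GPFSEL1 at bit offset 21, so setting it to output means writing 001 into those three bits.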

C Code

Here is the C code for my little flashing lights program: three source files, main.c, plus gpio.h and gpio.c for the GPIO access routines.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include "gpio.h"

int main()
{
    int i;
    int sleepTime = 200000;

    if (-1 == GPIOExport(17) ||
        -1 == GPIOExport(27) ||
        -1 == GPIOExport(22))
        return(1);

    // Necessary sleep to allow the pin
    // files to be created.
    usleep(10000);

    /*
     * Set GPIO directions
     */
    if (-1 == GPIODirection(17, OUT) ||
        -1 == GPIODirection(27, OUT) ||
        -1 == GPIODirection(22, OUT))
        return(2);

    for ( i = 0; i < 10; i++ )
    {
        if (-1 == GPIOWrite(17, HIGH))
            return(3);
        usleep(sleepTime);
        if (-1 == GPIOWrite(17, LOW))
            return(3);
        if (-1 == GPIOWrite(27, HIGH))
            return(3);
        usleep(sleepTime);
        if (-1 == GPIOWrite(27, LOW))
            return(3);
        if (-1 == GPIOWrite(22, HIGH))
            return(3);
        usleep(sleepTime);
        if (-1 == GPIOWrite(22, LOW))
            return(3);
    }
    return( 0 );
}

// gpio.h

#define IN  0
#define OUT 1

#define LOW  0
#define HIGH 1

extern int GPIOExport(int pin);
extern int GPIOUnexport(int pin);
extern int GPIODirection(int pin, int dir);
extern int GPIORead(int pin);
extern int GPIOWrite(int pin, int value);

/* gpio.c
 *
 * Raspberry Pi GPIO example using sysfs interface.
 * Guillermo A. Amaral B. <g@maral.me>
 *
 * This file blinks GPIO 4 (P1-07) while reading GPIO 24 (P1_18).
 */

#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include "gpio.h"

int GPIOExport(int pin)
{
#define BUFFER_MAX 3
    char buffer[BUFFER_MAX];
    ssize_t bytes_written;
    int fd;

    fd = open("/sys/class/gpio/export", O_WRONLY);
    if (-1 == fd) {
        fprintf(stderr, "Failed to open export for writing!\n");
        return(-1);
    }
    bytes_written = snprintf(buffer, BUFFER_MAX, "%d", pin);
    write(fd, buffer, bytes_written);
    close(fd);
    return(0);
}

int GPIOUnexport(int pin)
{
    char buffer[BUFFER_MAX];
    ssize_t bytes_written;
    int fd;

    fd = open("/sys/class/gpio/unexport", O_WRONLY);
    if (-1 == fd) {
        fprintf(stderr, "Failed to open unexport for writing!\n");
        return(-1);
    }

    bytes_written = snprintf(buffer, BUFFER_MAX, "%d", pin);
    write(fd, buffer, bytes_written);
    close(fd);
    return(0);
}

int GPIODirection(int pin, int dir)
{
    static const char s_directions_str[] = "in\0out";

#define DIRECTION_MAX 35
    char path[DIRECTION_MAX];
    int fd;

    snprintf(path, DIRECTION_MAX, "/sys/class/gpio/gpio%d/direction", pin);
    fd = open(path, O_WRONLY);
    if (-1 == fd) {
        fprintf(stderr, "Failed to open gpio direction for writing!\n");
        return(-1);
    }

    if (-1 == write(fd, &s_directions_str[IN == dir ? 0 : 3], IN == dir ? 2 : 3)) {
        fprintf(stderr, "Failed to set direction!\n");
        close(fd);  /* don't leak the descriptor on the error path */
        return(-1);
    }
    close(fd);
    return(0);
}

int GPIORead(int pin)
{
#define VALUE_MAX 30
    char path[VALUE_MAX];
    char value_str[3];
    int fd;

    snprintf(path, VALUE_MAX, "/sys/class/gpio/gpio%d/value", pin);
    fd = open(path, O_RDONLY);
    if (-1 == fd) {
        fprintf(stderr, "Failed to open gpio value for reading!\n");
        return(-1);
    }

    if (-1 == read(fd, value_str, 3)) {
        fprintf(stderr, "Failed to read value!\n");
        close(fd);
        return(-1);
    }

    close(fd);

    return(atoi(value_str));
}

int GPIOWrite(int pin, int value)
{
    static const char s_values_str[] = "01";

    char path[VALUE_MAX];
    int fd;

    snprintf(path, VALUE_MAX, "/sys/class/gpio/gpio%d/value", pin);
    fd = open(path, O_WRONLY);
    if (-1 == fd) {
        fprintf(stderr, "Failed to open gpio value for writing!\n");
        return(-1);
    }

    if (1 != write(fd, &s_values_str[LOW == value ? 0 : 1], 1)) {
        fprintf(stderr, "Failed to write value!\n");
        close(fd);
        return(-1);
    }

    close(fd);
    return(0);
}

Summary

This was a quick little introduction to C development on the Raspberry Pi. With the GCC compiler, Code::Blocks, GNU Debugger and GNU Make you have all the tools you need for serious C development. I find the Raspberry Pi is sufficiently powerful to do quite a bit of development work with good productivity. As a learning/teaching environment it’s excellent. It’s really amazing what you can do with a $35 single board computer.

Written by smist08

December 23, 2017 at 10:48 pm

Predictions for 2018

with one comment

Introduction

As 2017 draws to a close, I see a lot of predictions articles for the new year. I’ve never done one before, so what the heck. Predictions articles are notorious for being completely wrong, so take this with a grain of salt. The main problem is that things tend to take much longer than people expect so sometimes predictions are correct, but take ten years instead of one. Then again some predictions are just completely wrong. Some predictions keep reappearing and never coming true, like Linux replacing Windows or Microsoft releasing a successful phone. I think most writers find they get a lot of readers on these articles and then no one bothers to check up on them a year later. I’ll assume this is the case and go ahead and make some predictions. Some of these will be more concrete and some will be continuing trends.

Blockchain/Bitcoin

I’m not going to make any predictions on the value of Bitcoin. The more interesting part is the blockchain algorithm behind it, which provides a way to record transfers of money reliably in a distributed manner without any centralized authority. The real disruption will come when services like credit or debit cards start to be supplanted by blockchain transactions. Several big companies like IBM are investing heavily in the infrastructure to support this. Right now credit and debit cards charge very high fees, and many businesses are highly motivated to find an alternative. Blockchain offers a ray of hope for removing the transaction charge/tax that exists today on every purchase. I doubt that credit and debit cards will disappear this year, but I do predict that blockchain will start to appear in a number of business-to-business financial exchanges, perhaps something like Walmart and their suppliers. This will be the start of a long decline for the existing credit and debit card companies unless they innovate and reduce their costs. Right now they are going the route of lobbying governments to make blockchain illegal, but like the music industry protecting CDs they are fighting an ultimately losing battle.

AI

What we are calling Artificial Intelligence will continue to evolve and become more and more useful. We won’t reach true strong AI this year and the singularity is still a ways off, but the advances are coming quickly, both in the algorithms and in the hardware to run them on. Will this be the year of the self-driving car? Perhaps in small numbers; we are already seeing self-driving taxis in Singapore and Phoenix, and I think we are primed for this to take off big time. Some of the big cost savings will come from self-driving buses, taxis and trucks. However governments still need to figure out how to alleviate the disruption to the work force this will cause. We will see more and more AI solutions rolled out in sales, inventory replenishment and scientific research. Speech, translation and handwriting recognition systems will keep getting better, as will the predictive systems that suggest movies to watch and music to listen to. Products like Alexa and Google Home will become more widespread and their perceived intelligence will improve daily.

Privacy and Security

2017 was a very bad year for data breaches, ransomware attacks, government interference and a general trend towards imposing restrictions on the Internet. 2018 will be worse. We have national security agencies like Russia’s operating with impunity. We have rogue nations like North Korea launching ransomware attacks. We have the removal of Net Neutrality in the USA allowing ISPs and the government to spy on everything you do. Due to the amounts of money involved and a general lack of oversight or prosecution from governments, 2018 will set new records for data breaches, stealing of personal information, botnets and ransomware attacks.

DIY

In the early days of personal computers the Apple II and IBM PC were quite open hardware architectures with slots for expansion boards and all sorts of interface capabilities. Software was also open, interfaces were documented (either by the manufacturer or reverse engineers) and you could run any software you liked. Now hardware is all closed with no interface slots and you are often lucky to get a USB port. With many modern devices you can’t even replace the battery.

With the introduction of the $35 Raspberry Pi, suddenly DIY and home hardware projects have had a resurgence. Since the Raspberry Pi runs Linux, you can run any software you like on it (ie no regulated App store).

The Raspberry Pi won’t have a refresh until 2019, but in the meantime many companies, seeing an opportunity, are offering similar boards with more memory and other enhancements. In 2018 we’ll see the continuing explosion of Raspberry Pi sales along with an explosion of add-ons and DIY projects. All the similar and clone products should also do well and fill some niches that the Pi has ignored.

Low Cost Computers

The Raspberry Pi showed you can make a fully useful computer for $35. Others have noticed, and Microsoft has produced an ARM version of Windows. Now we are seeing small complete computers based on ARM processors being released. Right now they are a bit expensive for what you get, but for 2018 I predict we are going to start seeing fully usable computers for around $200. These will be more functional than the existing x86 based Chromebooks and Netbooks and allow you to run a choice of OS’s, including Linux, Android and Windows. Part of what will make these more successful is that emulation software has gotten much better, so you can run x86 programs on these devices now. Expect to see more RAM than a Pi and SSD drives installed. For laptops expect quite long battery life.

AR/VR

Augmented Reality and Virtual Reality have received a lot of attention recently, but I think the headsets are still too clunky and these will remain a small niche device through 2018. Popular perhaps in the odd game, not really mainstream yet.

Cloud Migration

People’s cloud migrations will continue. But due to the complexity of hybrid clouds and Infrastructure as a Service (IaaS), many are going to reconsider. Companies will rethink managing their own software installations and just adopt Software as a Service (SaaS). Many companies will move their data centers to the cloud, whether Amazon, Google, Microsoft or another, but they will find this quite complex and quite expensive over time. This will cause them to consider migrating from their purchased, installed applications to true SaaS offerings, so they don’t have to worry about infrastructure at all. Although IaaS will continue to grow in 2018, SaaS will grow faster. Similarly, at some point in a few years IaaS will reach a maximum and start to slowly decline. The exception will be specialty infrastructure, like servers with specialized AI processors or GPUs that can perform specific high intensity jobs but don’t require a continuous subscription.

Summary

Those are my predictions for 2018. Blockchain starting to blossom, security and privacy under greater attack, AI appearing everywhere (and not just marketing material), DIY gaining strength, dramatically lower cost computers, not much in AR/VR and cloud cycling through local data centers to IaaS to SaaS. I guess we can check back next year to see how we did.

 

Merry Christmas and Happy New Year.

 

Written by smist08

December 21, 2017 at 9:55 pm

On Net Neutrality

leave a comment »

Introduction

With Ajit Pai and the Republican led FCC removing net neutrality regulations in the USA, there is a lot of debate about what this all means and how it will affect our Internet. Another question is whether other jurisdictions, like here in Canada, will follow suit. The Net Neutrality regulations in the USA were introduced under Barack Obama in 2015 to combat some bad practices by Internet Service Providers (ISPs) that were also cable companies. Namely, they were trying to kill off streaming services like Netflix to preserve their monopoly on TV content via their pay by channel model. Net Neutrality put a stop to that by requiring that all data over the Internet’s pipes be treated equally. Under net neutrality streaming services blossomed and thrived. Are they now too big to fail? Can new companies have any chance to succeed? Will we all be paying more for basic Internet? Let’s look at the desires of the various players and see what hope we have for the Internet.

Evil Cable Companies

The cable companies want to maintain their cable TV channel business model, where they charge TV channels to be part of their packages and then charge the consumers for getting them (plus the consumer pays by watching commercials). With the Internet, people are going around the cable companies with various streaming services. The cable company charges a flat (high) monthly fee for Internet access, usually based on maximum allowable bandwidth. What the cable companies don’t like is that people are switching in droves to streaming services like Netflix, Amazon Prime or Crave. Like the music companies fighting to save CD sales, they are fighting a losing battle, just pissing off their customers and probably accelerating their decline.

So what do the Cable companies want? They want a number of things. One is to have a mechanism to stifle new technologies and protect their existing business models. This means monitoring what people are doing and then blocking or throttling things they don’t like. Another is to try to make up revenue on the ISP side as cable subscription revenue declines. For this they want more of an App market where you buy or subscribe to bundles of apps to get your Internet access. They see what the cell phone manufacturers are doing and want a piece of that action.

The cable companies know that most people have very limited choices and that if the few big remaining cable and phone companies implement these models then consumers will have no choice but to comply.

Evil Cell Phone Companies

Like the cable companies, the phone companies want to protect their existing business models. To some degree the world has already changed on them: they no longer make all their money charging for long distance phone calls. Instead they charge rather exorbitant fees for cell phone Internet access. Often, due to mergers, the phone and cable companies are one and the same, so the phone companies often have the same interests as the cable companies. Similarly, without net neutrality the phone companies can start to throttle services they feel compete with their own, like Skype and FaceTime. They also want the power to kill any future technologies that threaten them.

Evil Internet Companies

The big Internet companies like Google and Facebook claim they promote net neutrality, but their track record isn’t great. Apple invented the app store model, which Google happily embraced for Android. Many feel the app market is as big a threat to the Internet as the loss of Net Neutrality. Instead of using a general Internet browser, the expectation is that you use a collection of apps on your phone. This adds to the cost of startups, since they need to produce a website and then both an iOS and an Android app for their service. Apps are heavily controlled by the Apple and Google app stores; apps that Apple or Google don’t like are removed. This gives Apple and Google very strong control to stifle new innovative companies that they feel threatened by. Similarly, companies like Facebook or Netflix have the resources to create apps for all sorts of platforms, so they aren’t really fighting for Net Neutrality so much as ensuring their apps run on all sorts of TV set top boxes and other device platforms. They don’t mind so much paying extra fees, as this all raises the cost of entry for future competitors.

Evil Government

Why is the government so keen to eliminate Net Neutrality? The main thing is control. Right now the Internet is like the wild west and they are scared they don’t have sufficient control of it. They want to promote technologies like the deep packet inspection the ISPs are working on. They would like to be able to act as a man in the middle in secure communications and monitor everything. They would love to be able to remove sites from the Internet. I think many western governments look jealously at what China does with their Great Firewall and would love the same control. In the early days of the telephone the dangers of government abuse were recognized, and that is why laws were put in place requiring search warrants from judges to tap or trace phone calls. Now the government has fought hard to avoid the same oversight on Internet monitoring. They see the removal of Net Neutrality as their opening to work with a few large ISPs to gain much better monitoring and control of the Internet.

The mandate of the government is to provide some corporate oversight to avoid monopolistic abuses of power. They have failed in this by allowing such a large consolidation of ISPs to very few companies and then refusing to regulate or provide any checks and balances over this new monopoly. As long as the government is scared of the Internet and considers it in its best interest to restrict it, things don’t look good.

Pirates and Open Source to the Rescue

So far that looks like a lot of power working to control and regulate the Internet. What are some ways to combat that? Especially if there is very little competition in ISPs due to all the mergers that have taken place. Let’s look at a few alternatives.

Competition

Where there is competition, take the time to read reviews and shop around. Often you will get better service with a smaller provider. Even if it costs a bit more, factor in whether you get better privacy and more freedom in your Internet access. Really just money controls these decisions and a consumer revolt can be very powerful. Also beware of multi-year contracts that make it hard to change providers (assuming you actually have a choice).

VPN

In many countries VPNs are already illegal, and that could happen in North America too. But if it does, it will greatly restrict people’s ability to work from home. As long as you can use a VPN you have some freedom and privacy. However, note that most VPNs don’t have the bandwidth for streaming video services, and would likely be throttled if they did.

The Dark Net

Another option is the darknet and setting up Tor nodes and using the Onion browser. The problem here is that it’s too technical for most people and mostly used for criminal enterprises. But if things start to get really bad due to the loss of Net Neutrality, more development will go into these technologies and they could become more widespread.

Peer to Peer

BitTorrent has shown that a completely distributed peer to peer network is extremely hard to disrupt. Governments and major corporations have spent huge amounts of time and money trying to take down BitTorrent sites used to share movies and other digital content. Yet they have failed. It could be that the loss of Net Neutrality will drive more development into these technologies and force a lot of services to abandon the centralized server model of Internet development. After all if your service comes from millions of IP addresses all over the world then how does an ISP throttle that?

Use Browsers not Apps

If you access web sites from browsers rather than apps, you are helping promote an open and free Internet, since no app store gatekeeper can pull a service you rely on. The good thing is that the core technologies in the Mozilla and WebKit browsers are open source, so creating and maintaining browsers isn’t under the control of a small group of companies. Chromium and Firefox are both really good open source browsers that run on many operating systems and devices.

Summary

Will the loss of Net Neutrality in the USA destroy the Internet? Only time will tell. But I think we will start to see a lot of news stories (if they aren’t censored) over the coming years as the large ISPs start to flex their muscles. We saw the start of this with the throttling of streaming services that caused Net Neutrality to be enacted in the first place, and we’ll see those abuses return fairly quickly.

At least in the western world these sorts of bad government decision making can be overridden by elections, but it takes a lot of activism to make changes given the huge amounts of money the cable and phone companies donate to both political parties.

Written by smist08

December 19, 2017 at 7:22 pm