Stephen Smith's Blog

Musings on Machine Learning…

Posts Tagged ‘arm’

On ARM Based MacBooks


Introduction

There are a lot of rumours circulating that Apple will start releasing ARM based MacBooks towards the end of this year or early next year. There has been a lot of negativity about this possibility, but I feel the move will be an overall positive for Apple and Mac computers. In this blog post, I’m going to look at the commonly mentioned criticisms and debunk them as the myths they are.

Performance of ARM vs Intel or AMD

Many claim that Intel and AMD chips are faster than ARM chips. This only applies at the high end. Both Intel and AMD produce excellent high end chips that they charge a fortune for, then release lower powered versions for general consumers. Further, the most powerful AMD and Intel chips are power hungry and don’t appear in laptops, due to the heat they produce and how fast they drain batteries.

Most laptops run these less powerful, less power hungry chips. For ARM chips to be competitive, they only need to compare well to Intel i3 or i5 chips, since these are what most people really run. I’m writing this on an older i3 based laptop; when I run CPU benchmarks on it, it scores half of what the ARM CPU in my Raspberry Pi 4 scores. And the ARM CPU in a Raspberry Pi is an older design, chosen to keep the price low.

The processing power per watt of ARM processors is far superior to that of Intel or AMD chips. Further, ARM processors typically have more cores and coprocessors than Intel or AMD chips, because Intel and AMD deliberately strip features out of their consumer parts to keep up demand for their more expensive products.

Availability of Software

There is a claim that no one will compile ARM versions of their software or bother to port their programs from the Intel instruction set to the ARM instruction set. The Raspberry Pi has already solved this problem. When I started with the Raspberry Pi, many common Linux programs either wouldn’t work on the Pi or you needed to build them yourself from source. Now every major Linux open source project produces 32-bit and 64-bit ARM binaries as part of its regular build process.

Further, the Apple ecosystem is already familiar with ARM, since the iPhone and iPad market is far larger than the MacOS market.

Sure, Microsoft has trouble getting software for their ARM based Surface laptops, but that is unique to the Windows world and doesn’t apply to Linux or Apple.

Apple has the experience to move their ecosystem across CPU architectures. They moved the MacOS world from PowerPC to Intel. This transition looks far easier.

Problems in the Windows World Apply to Apple

There have been a number of attempts to move Microsoft Windows to a non-Intel platform. So far these have all failed. The problem in the Windows world is that there is tons of software out there, much of it from legacy companies that have gone out of business, whose source code isn’t available to recompile. The legacy of rejecting open source software and promoting vendor lock-in has now tied Microsoft’s hands, preventing them from moving forward.

The other problem is that Microsoft has tried to use this transition as a mechanism for locking customers in, for instance only allowing software to be installed from the Microsoft store, or limiting functionality in the ARM version to promote demand for their more expensive products.

Advantages of ARM Based Laptops

There are several advantages for Apple going with ARM processors in their MacBooks:

  1. The ARM processor uses less power, so battery life will be far longer.
  2. It provides differentiation between Apple products and the millions of Intel/AMD Windows laptops on the market.
  3. It reduces Apple’s cost for CPUs by 60%.
  4. It allows Apple more room to innovate in their laptop line.
  5. The ARM Processors are produced by multiple manufacturers, so Apple can use the best of breed rather than relying on Intel’s lagging process manufacturing technology. 

Summary

I’m looking forward to a wave of ARM based laptops, whether from Apple or from the various Linux vendors. I think this is the Intel/AMD duopoly’s last stand, and competition can only be good. I’m tired of using crippled chips like the i3 or Celeron, and look forward to much greater processing power at a lower cost with longer lasting batteries.

Written by smist08

March 27, 2020 at 2:01 pm

Posted in Business


Interrupting the ARM Processor


Introduction

I recently published my book, “Raspberry Pi Assembly Language Programming”; information on it is available here. In the book, I cover the Assembly language instructions you can use when running under Raspbian, so we could always have working examples to play with. However, there is more to the ARM processor, the details of which are handled transparently by Linux, so we don’t need to (and can’t) play with them. In this article we’ll start to look at how the ARM processor deals with interrupts. This will be a simplified discussion, so we don’t get bogged down in the differences between Raspberry Pi models, how the various interrupt controllers work, the various ARM operating modes, or interactions with the virtual memory manager.

What Are Interrupts?

Interrupts started as a mechanism for devices to report, asynchronously, that they have data. For instance, if you ask the disk drive to read a block of data, the computer can keep doing other work (perhaps suspending the requesting process and running another process) until the disk drive sends the processor an interrupt telling it that it has retrieved the data. The ARM processor then needs to process this data quickly so that the disk drive can go on to other work. Some devices need the data handled promptly or it will be overwritten by new data arriving at the device; most hardware devices have a limited buffer or queue of data they can hold before it is overwritten.

The interrupt mechanism has since been used for additional purposes, like reporting memory access errors and illegal instruction errors. There are also a number of system timers that send interrupts at regular intervals; these can be used to update the system clock, or to preempt the current task and give other tasks a turn under the operating system’s multitasking algorithm. Operating system calls are often implemented using interrupts, since a side effect of an interrupt being triggered is to change the operating state of the processor from user mode to system mode, allowing the operating system to run at a more privileged level. You can see this described in Chapter 7 of my book, on Linux Operating System Services.
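For example, under 32-bit Raspbian, a user mode program requests an operating system service by executing the SVC instruction, which raises a software interrupt and switches the processor into a privileged mode. Here is a minimal sketch using the Linux exit system call (the same pattern appears in my flashing LEDs program):

MOV R0, #0   @ return code to pass to exit
MOV R7, #1   @ 1 is the system call number for exit
SVC 0        @ software interrupt into the kernel

The kernel’s software interrupt handler then looks at R7 to decide which service is being requested.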

How Are Interrupts Called?

If a device receives data, it notifies the interrupt controller, which maps the interrupt to one of the ARM processor interrupt codes. Transfer of control immediately switches to the code contained in a specific memory location. Below is a table of the various interrupts supported by one particular ARM model (the classic vector layout). In Raspbian the memory offsets are added to 0xffff0000 to get the actual address.
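Offset  Exception
0x00    Reset
0x04    Undefined Instruction
0x08    Software Interrupt (SVC)
0x0C    Prefetch Abort
0x10    Data Abort
0x14    (Reserved)
0x18    IRQ
0x1C    FIQ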

Each ARM instruction is 32-bits in size, so each slot in the interrupt table can hold a single ARM instruction; hence this has to be a branch instruction, or some other instruction that modifies the program counter. The exception is the last one, the FIQ interrupt, which is the “fast” interrupt. Since fast interrupts need fast processing, it is deemed too slow to execute a branch instruction first, so the FIQ handler can be placed directly at this address, which is why it’s the last one in the table.

Some example instructions you might see in this table are:

B myhandler @ will be a PC relative address
MOV PC, #0x1200 @ immediate has to be a valid operand2
LDR PC, [PC, #100] @ load PC with value from nearby memory

You can read about operand2 and the details of these instructions in my book.

Interrupt Calling Convention

When you call a regular function, there is a protocol, or calling convention, that specifies who is responsible for saving which registers: some are saved by the caller if it needs them preserved, and some have to be saved by the callee if it uses them. There are conventions on how to use the stack and how to return values from functions. With interrupt routines, the code that is interrupted can’t take part in any such convention; it’s been interrupted and has no knowledge of what is happening. Preserving the state of things is handled entirely by a combination of the ARM CPU and the interrupt handler code.

The ARM processor has a bank of extra (shadow) registers that it automatically switches with the regular registers when an interrupt happens. This is especially necessary to preserve the Current Program Status Register (CPSR). Below is a rough summary of the banked registers for the various interrupt states (simplified from ARM’s block diagram):
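Mode        Banked registers
FIQ         R8-R12, SP (R13), LR (R14), SPSR
IRQ         SP (R13), LR (R14), SPSR
Supervisor  SP (R13), LR (R14), SPSR
Abort       SP (R13), LR (R14), SPSR
Undefined   SP (R13), LR (R14), SPSR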

Consider the code:

CMP R6, #66
BEQ _loop

If the interrupt occurs between these two instructions, then the CPSR (which holds the result of the comparison) must be preserved or the BEQ instruction could do the wrong thing. This is why the ARM processor switches the CPSR with one of the SPSR registers when the interrupt occurs and then switches them back when the interrupt service routine exits.

Similarly there are shadow registers for the Stack Pointer (SP) and Link Return (LR) register.

For fast (FIQ) interrupts, the ARM CPU also switches registers R8-R12, so the FIQ interrupt handler has five registers to use, without wasting time saving and restoring things.

If the interrupt handler uses any other registers then it needs to store them on the stack on entry and pop them from the stack before returning.
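Putting this together, a minimal sketch of an IRQ handler might look like the following (the handler body and register choices are illustrative; the return instruction is explained below):

irq_handler:
        PUSH    {R0-R3, R12}   @ save the regular registers we are going to use
        @ ... query the interrupt controller and service the device here ...
        POP     {R0-R3, R12}   @ restore them
        SUBS    PC, LR, #4     @ return from the interrupt (see below)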

Interrupting an Interrupt?

When an interrupt occurs, the ARM processor disables interrupts until the interrupt handler returns. This is the simplest case, since the operating system writer doesn’t have to worry about their interrupt routine being interrupted. It works fine as long as the handler finishes quickly, but some interrupt handlers have quite a bit of work to do, for instance if a device returns 4k of data to be processed. Notice that the shadow registers have separate copies for each type of interrupt. This way, if you are handling an IRQ interrupt, it is easy to enable FIQ interrupts and allow the IRQ handler to be interrupted by the higher priority FIQ. Newer interrupt controllers support more sophisticated nested interrupt handling, but that can be the topic for another article. Linux can disable or enable interrupts as it needs, for instance to finish initialization on a reboot before turning interrupts on.

Returning from an Interrupt

The instruction you use to return from an interrupt is interesting:

SUBS R15, R14, #4

This instruction takes the Link Return (LR) register, subtracts 4, and places the result in the Program Counter (PC). The ARM processor knows about this special form, so it can re-swap the shadow registers at the same time.

Normally we would just move LR to PC to return, so why subtract 4? The reason is the instruction pipeline. Remember the classic pipeline model is three steps: first the instruction is loaded from memory, then it’s decoded, and then it’s executed. When we were interrupted, we lost the last instruction decode, so we need to go back and do it again.

This is a bit of a relic, since newer ARM processors have much more sophisticated pipelines, but once this behaviour was ingrained in enough code, the ARM processor had to respect it and stick with this scheme.

Summary

This was a quick introductory overview of how the ARM processor handles interrupts. You don’t need to know this unless you are working on the ARM support in the Linux kernel, or you are creating your own operating system. Still it is interesting to see what is going on under the hood as the ARM Processor and Linux operating system provide all their services to make things easy for your programs running in user mode.

Written by smist08

November 22, 2019 at 11:33 am

Out-of-Order Instructions


Introduction

We think of computer processors executing a set of instructions one at a time, in sequential order. As programmers, this is exactly what we expect the computer to do, and if the computer decided to execute our carefully written code in a different order it would terrify us; we would expect our program to fail, producing wrong results or crashing. However, we see manufacturers claiming their processors execute instructions out-of-order and that this is a feature that improves performance. In this article, we’ll look at what is really going on here and how it can benefit us, without causing too much fear.

Disclaimer

ARM defines the Instruction Set Architecture (ISA), which defines the Assembly Language instruction set. ARM provides some reference implementations, but individual manufacturers can take these, customize them, or develop their own independent implementation of the ARM instruction set. As a result, the internal workings of ARM processors differ from manufacturer to manufacturer. A main point of difference is in performance optimizations. Apple is very aggressive in this regard, which is why the ARM processors in iPads and iPhones beat the competition. This means the level of out-of-order execution differs from manufacturer to manufacturer; it is also much more prevalent in newer ARM chips. As a result, the examples in this article will apply to a selection of ARM chips, but not all.

A Couple of Simple Cases

Consider the following small bit of code to multiply two numbers then load another number from memory and add it to the result of the multiplication:

MUL R3, R4, R5 @ R3 = R4 * R5
LDR R6, [R7]   @ Load R6 with the memory pointed to by R7
ADD R3, R6     @ R3 = R3 + R6

The ARM processor is a RISC processor whose goal is to execute each instruction in one clock cycle. However, multiplication is an exception and takes several clock cycles longer, due to the loop of shifting and adding it has to perform internally. The load instruction doesn’t rely on the result of the multiplication and doesn’t involve the arithmetic unit, so it’s fairly simple for the ARM processor to see this and execute the load while the multiply is still churning away. If the memory location is in cache, chances are the LDR will complete before the MUL, and hence we say the instructions executed out-of-order. The ADD instruction needs the results of both the MUL and the LDR, so it must wait for both of them to complete before performing its addition.

Consider another example of three LDR instructions:

LDR R1, [R4] @ memory in swap file
LDR R2, [R5] @ memory not in cache
LDR R3, [R6] @ memory in cache

Here the memory being loaded by the first instruction has been swapped out to secondary storage, so loading it is going to be very slow. The second memory location is in regular memory. DDR4 memory, like that used in the new Raspberry Pi 4, is pretty fast, but not as fast as the CPU, and the memory system is also busy fetching instructions, so this second LDR might take a couple of cycles to execute: it makes a request to the memory controller, and its request is queued with everything else going on. The third instruction assumes the memory is in the CPU cache and hence is processed immediately, so this instruction really does take only one clock cycle.

The upshot is that these three LDR instructions could well complete in reverse order.

Newer ARM processors can look ahead through the instruction stream for independent instructions to execute, and the size of this look-ahead pool determines how out-of-order things can get. The important points are that instructions with dependencies can’t start early, and that to the programmer it still looks like the code is executing in order; all this magic is transparent to the correct execution of the program.

Since the CPU is executing all these instructions at once, you might wonder what the value of the program counter register (PC) is? This register has a very precisely defined value, since it is used for PC relative addressing. So the PC can’t be affected by out-of-order execution. 

Coprocessors

All newer ARM processors include floating-point coprocessors and NEON vector coprocessors. The instructions that execute on these usually take a few instruction cycles. If the instructions that follow a coprocessor instruction are regular ARM instructions and don’t rely on the results of coprocessor operations, then they can continue to execute in parallel with the coprocessor. This is a handy way to get more parallelism going and keep all parts of the CPU busy. Intermixing coprocessor and regular instructions is another great way to leverage out-of-order execution for better performance.
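For instance, here is a sketch of intermixed VFP and integer instructions (the register choices are illustrative); the integer instructions don’t depend on the multiply, so they can proceed while the VFP unit works:

VMUL.F64 D0, D1, D2  @ double precision multiply runs in the VFP unit
LDR      R1, [R4]    @ this integer load can proceed in parallel
ADD      R2, R2, #1  @ as can this integer add
VSTR     D0, [R5]    @ store the result once the multiply completes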

Compilers and Code Generation

This suggests that if a compiler code generator, or an Assembly Language programmer, rearranges some of their instructions, they can get more things happening at once in parallel, giving the program better performance. ARM Holdings contributes to the GNU Compiler Collection (GCC) to fully utilize the optimizations present in their reference implementations. In the ARM specific options for GCC, you can select the ARM processor version that matches your target and get more advanced optimizations. Since Apple creates their own development tools in Xcode, they can add optimizations specific to their custom ARM implementations.

As Assembly Language programmers, if we want the absolute best performance, we might consider rearranging some of our instructions so that instructions that are independent of each other appear in a row and hopefully execute in parallel, as in the sketch below. This can require quite a bit of testing to reverse engineer the exact out-of-order capability of your particular target ARM processor model. As always with performance optimizations, you must measure the performance to prove you are improving things, and not just making your code more cryptic.
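As a sketch of the idea, reusing the earlier example, we can hoist the independent load above the long-running multiply, so both are in flight at once even on a processor with a small out-of-order window:

@ Before: the load sits behind the multiply in program order
MUL R3, R4, R5
LDR R6, [R7]
ADD R3, R3, R6

@ After: the load is issued first, hiding some of its latency
LDR R6, [R7]
MUL R3, R4, R5
ADD R3, R3, R6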

Interrupts

This all sounds great, but what happens when an interrupt happens? This could be a timer interrupt to say your time-slice is up and another process gets to use the ARM Core, or it could be that more data needs to be read from the Wifi or a USB device.

Here the ARM CPU designer has a choice: they can forget about the work-in-progress and handle the interrupt quickly, or they can wait a couple of cycles to let the work-in-progress complete and then handle the interrupt. Either way, they have to allow the interrupt handler to save the current context and then restore it to continue execution. Typically interrupt handlers do this by saving all the CPU and coprocessor registers to the system stack, doing their work, and then restoring the state.

When you see an ARM processor advertised as designed for real-time or industrial use, this typically means that it handles interrupts quickly, with minimal delay; the work-in-progress is discarded and redone after the interrupt is finished. For ARM processors designed for general purpose computing, user performance usually matters more than being super responsive to interrupts, and hence they can let some of the work-in-progress complete before servicing the interrupt. For general purpose computing this is fine, since attached devices like USB and Ethernet have buffers that can hold enough data while waiting for the CPU to get around to them.

A Step Too Far and Spectre

Hardware designers went even further with branch prediction: if a conditional branch instruction needs to wait for a condition code to be set, the processor doesn’t wait but keeps going, assuming one branch direction (perhaps based on the result from the last time this code executed). The problem is that the CPU then has to save the current state, in case it guesses wrong and needs to go back. This CPU state was saved in a CPU cache that was only used for this purpose but had no security protection, resulting in the Spectre attack, which figured out a way to get at this data. This caused data leakage across processes, or even across virtual machines. The whole Spectre debacle showed that great care has to be taken with these sorts of optimizations.

Heat, the Ultimate Gotcha

Suppose your ARM processor has four CPU cores and you write a brilliant Assembly Language program that spreads work across all four cores and fully exploits out-of-order execution. Your program is now using every bit of the ARM CPU: each core is intermixing regular ARM, floating-point and NEON instructions, and you have interleaved your ARM instructions to get the arithmetic unit operating in parallel with the memory unit. This will be the fastest implementation yet. Then you run your program: it gets off to a great start, but then suddenly slows to a crawl. What happened?

The enemy of parallel processing on a single chip is heat. Everything the CPU does generates a little heat, and the more things you get going at once, the more heat the CPU generates. Most ARM based computers, like the Raspberry Pi, assume you won’t be running the CPU this hard, and only provide heat dissipation for a more typical load. This is why Raspberry Pis usually do so badly playing high-res videos: they can do it, as long as they don’t overheat, which typically doesn’t take long.

This leaves you with a real engineering problem. You either need to add more cooling to your target device, or deliberately reduce the CPU usage of your program; perhaps paradoxically, you may get more work done using two cores rather than four, because you won’t be throttled due to overheating.

Summary

This was a quick overview of out-of-order instructions. Hopefully you don’t find them scary, and will keep the potential benefits in mind as you write your code. As newer ARM processors come to market, we’ll be seeing larger and larger pools of instructions executing in parallel, where the ability for instructions to execute out-of-order will have even greater benefits.

If you are interested in machine code or Assembly Language programming, be sure to check out my book: “Raspberry Pi Assembly Language Programming” from Apress. It is available on all major booksellers or directly from Apress here.

Written by smist08

November 15, 2019 at 11:11 am

RISC Instruction Encoding


Introduction

Modern microprocessors execute programs from memory that are formatted specifically for the processor and the instructions it is capable of executing. This machine code is generated by tools, either fairly directly from Assembly Language source code or via a compiler that translates a high level language to machine code. There are two popular philosophies on how machine code is structured. One is Reduced Instruction Set Computers (RISC), exemplified by ARM, RISC-V, PowerPC and MIPS processors; the other is Complex Instruction Set Computers (CISC), exemplified by Intel and AMD processors. In RISC computers, each instruction is quite small and does a tiny bit of work; in CISC computers, the instructions tend to be larger and each one does more work. The advantage of RISC processors is that the circuitry is simpler, which means they use less power; this is why nearly all mobile devices use RISC processors. In this article we will look at some of the tricks RISC computers use to keep their instructions small and quick.

32-Bit Instructions

Most RISC processors use 32-bit machine code instructions. It doesn’t matter whether the processor itself is 32-bit or 64-bit; that only refers to the size of pointers for memory addressing and the size of the registers, and in both cases the instructions stay 32-bits in length. With all rules there are exceptions: in RISC-V processors most instructions are 32-bit, but there is a facility to allow longer instructions where necessary, and ARM processors in 32-bit mode have a mode (Thumb) that limits instructions to 16-bits in length. Modern processors are very powerful and have a lot of functionality, so how do they encode all the information needed for an instruction into 32-bits? This restriction imposes a lot of discipline on the instruction set designers, but the solutions they have come up with are quite interesting. In comparison, Intel x86 instructions are variable length and can be as long as 120 bits.

Having all the instructions 32-bits in length makes building an efficient execution pipeline much easier, since you can load and start working on a set of instructions in parallel. You don’t need to decode one instruction to learn where the next one starts: you know there is a new instruction every 4 bytes in memory. This uniformity saves a lot of complexity and greatly enhances instruction execution throughput.

Where Do the Bits Go?

What needs to be encoded in a machine language instruction? Here are some of the possible components:

  1. The opcode. This tells the processor what the instruction does, whether it’s adding two numbers, loading data from memory or jumping to another program location. If the opcode takes 8-bits then there are 256 possible instructions. To really save space, some opcodes can use fewer bits; perhaps if the opcode starts with 011, the remaining bits can go to a larger immediate value.
  2. Registers. Microprocessors load data into registers and then process the data in the registers. Often two or three registers need to be specified in an instruction, like the two numbers to add and then where to put the result. If there are 32 registers, then each register field will take 5-bits.
  3. Immediate data. Most processors have a way to encode some data in an instruction. Like “LOAD R1, 5” might mean load the value 5 into register R1. Here 5 is data encoded in the instruction, and called an immediate value. The size of these varies based on the instruction and use cases.
  4. Memory Addresses. Data has to be loaded from memory, or program execution has to jump to a different memory location. Note that in a modern computer memory addresses are either 32-bit or 64-bits. These are both too big to fit in a 32-bit instruction (we need at least an opcode as well). In RISC, how do we specify memory addresses?
  5. Bits for additional parameters. Perhaps there are several addressing modes, or perhaps other options for an instruction that need to be encoded. Often there are a few bits in each instruction for this purpose.


That’s a lot of information to pack into a 32-bit instruction. How do they do it? My introduction to Raspberry Pi Assembly Language shows how this is done for ARM processors in 32-bit mode.
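As a concrete sketch, the classic 32-bit ARM data processing format (used by instructions like ADD R0, R1, R2) packs its fields roughly like this; other instruction classes use different layouts:

Bits   Field     Meaning
31-28  cond      condition code for conditional execution
27-26  00        marks a data processing instruction
25     I         1 if operand2 holds an immediate value
24-21  opcode    which operation (ADD, SUB, MOV, CMP, ...)
20     S         1 if the instruction sets the status flags
19-16  Rn        first operand register
15-12  Rd        destination register
11-0   operand2  immediate value or shifted register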

How to Load a Register

Let’s look at how to load a 32-bit register with data. We can’t fit a full 32-bit value inside a 32-bit instruction, so what do we do? You might suggest we load the value from memory rather than encode it in the instruction. This is a legitimate thing to do, but it just moves the problem, since we now need to get the 32 or 64-bit memory address into a register first.

First, we could do it in two steps. Perhaps we can fit a 16-bit value in an instruction and then perform two instructions to load the full value. In an ARM processor, there is a MOV instruction that can load a 16-bit immediate value, and a MOVT instruction that loads a 16-bit immediate value into the top 16-bits of a register. Suppose we want to load 0x12345678 into register R1; in 32-bit ARM Assembly we would write:

MOV  R1, #0x5678 @ load the low 16 bits, clearing the top 16 bits
MOVT R1, #0x1234 @ then load the top 16 bits, leaving the low bits alone

This works, and we do expect that working in RISC is going to take lots of small instructions to perform the work we need to get done. However, it is somehow not satisfying, since this is something we do a lot and it seems wasteful to take two instructions. The other problem is that if we are running in 64-bit mode and want to load a 64-bit register, it will take four instructions.
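As a sketch of the 64-bit case, AArch64 provides MOVZ and MOVK for exactly this, so loading 0x123456789ABCDEF0 into X1 takes four instructions:

MOVZ X1, #0x1234, LSL #48  @ load the top 16 bits, zeroing the rest
MOVK X1, #0x5678, LSL #32  @ insert bits 47-32, keeping the other bits
MOVK X1, #0x9ABC, LSL #16  @ insert bits 31-16
MOVK X1, #0xDEF0           @ insert bits 15-0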

Another trick is to make use of the Program Counter (PC) register, which points at the instructions currently being executed. If we position the value near the current instruction, we can load it by dereferencing the PC (plus a small offset). As long as the offset fits in the room we have for an immediate value, this works. In the ARM world, the Assembler helps us generate this code. We write something like:

LDR R1, =mydata

...

mydata: .WORD 0x12345678

Then the Assembler will convert the LDR instruction to something like:

LDR R1, [PC, #20]

Which means load the data pointed to by PC + 20 into R1. Now it only takes one instruction to load the data.  This technique has the advantage that it will remain one instruction to execute when dealing with 64-bit data.

Summary

This was a quick discussion of how RISC processors encode each machine code instruction as a 32-bit value. This is one of the key things that keeps RISC processors simple, allowing them to be quick while staying power efficient.

If you are interested in machine code or Assembly Language programming, be sure to check out my book: “Raspberry Pi Assembly Language Programming” from Apress. It is available on all major booksellers or directly from Apress here.

Written by smist08

November 8, 2019 at 11:55 am

Flashing LEDs in Assembler


Introduction

Previously I wrote an article introducing Assembler programming on the Raspberry Pi. That was quite a long article without much of a coding example, so I wanted to produce an Assembler language version of the little program I wrote in Python, Scratch, Fortran and C to flash three LEDs attached to the Raspberry Pi’s GPIO port on a breadboard. So in this article I’ll introduce that program.

This program is fairly minimal. It doesn’t do any error checking, but it does work. I don’t use any external libraries, and only make calls to Linux (Raspbian) via software interrupts (SVC 0). I implemented a minimal GPIO library using Assembler Macros along with the necessary file I/O and sleep Linux system calls. There probably aren’t enough comments in the code, but at this point it is fairly small and the macros help to modularize and explain things.

Main Program

Here is the main program, which probably doesn’t look structurally much different from the C version, since the macro names roughly match the GPIO library functions that the C code called. The main bit of Assembler code here is the loop to flash the lights 10 times. This is pretty straightforward: load 10 into register r6 and then decrement it until it hits zero.


@
@ Assembler program to flash three LEDs connected to the
@ Raspberry Pi GPIO port.
@
@ r6 - loop variable to flash lights 10 times
@

.include "gpiomacros.s"

.global _start             @ Provide program starting address to linker

_start: GPIOExport  pin17
        GPIOExport  pin27
        GPIOExport  pin22

        nanoSleep

        GPIODirectionOut pin17
        GPIODirectionOut pin27
        GPIODirectionOut pin22

        @ setup a loop counter for 10 iterations
        mov         r6, #10

loop:   GPIOWrite   pin17, high
        nanoSleep
        GPIOWrite   pin17, low
        GPIOWrite   pin27, high
        nanoSleep
        GPIOWrite   pin27, low
        GPIOWrite   pin22, high
        nanoSleep
        GPIOWrite   pin22, low

        @decrement loop counter and see if we loop
        subs    r6, #1      @ Subtract 1 from loop register setting status register
        bne     loop        @ If we haven't counted down to 0 then loop

_end:   mov     R0, #0      @ Use 0 return code
        lsl     R0, #2      @ Shift R0 left by 2 bits (ie multiply by 4)
        mov     R7, #1      @ Service command code 1 terminates this program
        svc     0           @ Linux command to terminate program

pin17:      .asciz  "17"
pin27:      .asciz  "27"
pin22:      .asciz  "22"
low:        .asciz  "0"
high:       .asciz  "1"


GPIO and Linux Macros

Now the real guts of the program are in the Assembler macros, and again it isn’t too bad. We use the Linux service calls to open, write, flush and close the GPIO device files in /sys/class/gpio. Similarly, nanosleep is a Linux service call for a high resolution timer. Note that ARM doesn’t have memory-to-memory instructions or operations on memory, so to do anything we need to load data into a register, process it and write it back out. Hence, to copy the pin number into the file name, we load the two pin characters and store them to the file name memory area. Hard coding the offset for this as 20 isn’t great; we could have used a .equ directive, or better yet implemented a string scan, but for quick and dirty this is fine. Similarly, we only implemented the parameters we really needed and ignored anything else. We’ll leave it as an exercise to the reader to flesh these out more. Note that when we copy the first byte of the pin number, we include a #1 at the end of the ldrb and strb instructions; this does a post increment by one on the index register that holds the memory location. This makes the ARM very efficient at accessing arrays (even without using NEON), since we combine the array read/write with the index increment in one instruction.

If you are wondering how to find the Linux service calls, look in /usr/include/arm-linux-gnueabihf/asm/unistd.h. This C include file has all the function numbers for the Linux system calls. Then you Google the call for its parameters, which are passed in order in registers r0, r1, …, r6, with the return code coming back in r0.


@ Various macros to access the GPIO pins
@ on the Raspberry Pi.

@ R5 is used for the file descriptor

.macro  openFile    fileName
        ldr         r0, =\fileName
        mov         r1, #01     @ O_WRONLY
        mov r7,     #5          @ 5 is system call number for open
        svc         0
.endm

.macro  writeFile   buffer, length
        mov         r0, r5      @ file descriptor
        ldr         r1, =\buffer
        mov         r2, #\length
        mov         r7, #4 @ 4 is write
        svc         0
.endm

.macro  flushClose
@fsync syscall
        mov         r0, r5
        mov         r7, #118    @ 118 is flush
        svc         0

@close syscall
        mov         r0, r5
        mov         r7, #6      @ 6 is close
        svc         0
.endm

@ Macro nanoSleep to sleep .1 second
@ Calls Linux nanosleep entry point which is function 162.
@ Pass a reference to a timespec in both r0 and r1
@ First is input time to sleep in seconds and nanoseconds.
@ Second is time left to sleep if interrupted (which we ignore)

.macro  nanoSleep
        ldr         r0, =timespecsec
        ldr         r1, =timespecsec
        mov         r7, #162    @ 162 is nanosleep
        svc         0
.endm

.macro  GPIOExport  pin
        openFile    gpioexp
        mov         r5, r0      @ save the file descriptor
        writeFile   \pin, 2
        flushClose
.endm

.macro  GPIODirectionOut   pin
        @ copy pin into filename pattern
        ldr         r1, =\pin
        ldr         r2, =gpiopinfile
        add         r2, #20
        ldrb        r3, [r1], #1 @ load pin and post increment
        strb        r3, [r2], #1 @ store to filename and post increment
        ldrb        r3, [r1]
        strb        r3, [r2]
        openFile    gpiopinfile
        writeFile   outstr, 3
        flushClose
.endm

.macro  GPIOWrite   pin, value
        @ copy pin into filename pattern
        ldr         r1, =\pin
        ldr         r2, =gpiovaluefile
        add         r2, #20
        ldrb        r3, [r1], #1    @ load pin and post increment
        strb        r3, [r2], #1    @ store to filename and post increment
        ldrb        r3, [r1]
        strb        r3, [r2]
        openFile    gpiovaluefile
        writeFile   \value, 1
        flushClose
.endm

.data
timespecsec:   .word   0
timespecnano:  .word   100000000
gpioexp:    .asciz  "/sys/class/gpio/export"
gpiopinfile: .asciz "/sys/class/gpio/gpioxx/direction"
gpiovaluefile: .asciz "/sys/class/gpio/gpioxx/value"
outstr:     .asciz  "out"
            .align  2          @ save users of this file having to do this.
.text

Makefile

Here is a simple makefile for the project if you name the files as indicated. Again note that WordPress and Google Docs may mess up white space and quote characters so these might need to be fixed if you copy/paste.

model: model.o
    ld -o model model.o

model.o: model.s gpiomacros.s
    as -ggdb3 -o model.o model.s

clean:
    rm model model.o


IDE or Not to IDE

People often do Assembler language development in an IDE like Code::Blocks. Code::Blocks doesn’t support Assembler language projects, but you can add Assembler language files to C projects. This is a pretty common way to do development, since you usually want to do most of your programming in a higher level language like C, and this way you also get full use of the C runtime. I didn’t do this; I just used a text editor, make and gdb (command line). This way the above program has no extra overhead, and the executable is quite small since there is no C runtime or any other library linked to it. The debug version of the executable is only 2904 bytes long, and the non-debug version is 2376 bytes. Of course, if I really wanted to reduce executable size, I could have used function calls rather than Assembler macros, since the macros duplicate their code everywhere they are used.

Summary

Assembler language programming is kind of fun, but I don’t think I would want to do too large a project this way. Hats off to the early personal computer programmers who wrote spreadsheet programs, word processors and games entirely in Assembler. Certainly writing a few Assembler programs gives you a really good understanding of how the underlying computer hardware works and what sort of things your computer can do really efficiently. You could even consider adding compiler optimizations for your processor to GCC; after all, compiler code generation has a huge effect on your computer’s performance.

Written by smist08

January 7, 2018 at 7:08 pm

Spectre Attacks on ARM Devices


Introduction

I predicted that 2018 would be a very bad year for data breaches and security problems, and we have already started the year with the Intel x86 specific Meltdown exploit and the Spectre exploit, which works on all sorts of processors and even on some JavaScript systems (like Chrome). Since my last article was on Assembler programming, and most of these types of exploits are created in Assembler, I thought it might be fun to look at how Spectre works and get a feel for how hackers can retrieve useful data out of what seems like nowhere. Spectre is actually a large new family of exploits, so patching them all is going to take quite a bit of time, and, like the older buffer overrun exploits, they are going to keep reappearing.

I’ve been writing quite a bit about the Raspberry Pi recently, so is the Raspberry Pi affected by Spectre? After all, it affects all Android and Apple devices based on ARM processors. The main Raspberry Pi operating system is Raspbian, which is a variant of Debian Linux optimized for the Pi. A recent criticism of Raspbian is that it is still 32-bit. It turns out that running the ARM in 32-bit mode eliminates a lot of the Spectre attack scenarios; we’ll discuss why in this article. If you are running 64-bit software on the Pi (like Android) then you are susceptible. You are also susceptible to the software versions of this attack, like those in JavaScript interpreters that support branch prediction (like Chromium).

The Spectre hacks work by exploiting how processor branch prediction works, coupled with how data is cached. The exploits use branch prediction to access data they shouldn’t, and then use the processor cache to retrieve the data for use. The original article by the security researchers is really quite good and worth a read; it’s available here. It has an appendix at the back with C code for Intel processors that is quite interesting.

Branch Prediction

In our last blog post we mentioned that all the ARM data processing instructions can be conditionally executed. This is because performing a branch instruction means the instruction pipeline needs to be cleared and restarted, which really stalls the processor. The ARM 32-bit solution was good, as long as compilers were good at generating code that efficiently utilized conditional instructions. Remember that most code for ARM processors is compiled using GCC, and GCC is a general purpose compiler that works on all sorts of processors; its optimizations tend to be general purpose rather than processor specific.

When ARM evaluated adding 64-bit instructions, they wanted to keep the instructions 32-bits in length, but they also wanted to add a bunch of instructions (like integer divide), so they made the decision to eliminate the bits used for conditionally executing instructions and have a bigger opcode instead (and hence lots more instructions). I think they also considered that their conditional instructions weren’t being used as much as they should have been and weren’t earning their keep. Plus, they now had more transistors to play with, so they could do a couple of other things instead. One was to lengthen the instruction pipeline well beyond the classic three stages, and the other was to implement branch prediction. Here the processor keeps a table of 128 branches and the route each took last time through. The processor then starts executing the most commonly chosen branch, on the assumption that once the conditional is figured out, it will very rarely need to throw away the work and start over. Generally this longer pipeline with branch prediction led to much better performance results. So what could go wrong?

Consider the branch statement:


if (x < array1_size)
    y = array2[array1[x] * 256];


This looks like a good bit of C code: it tests that an index is in range before accessing the array. If it didn’t do this check, then we could get a buffer overrun vulnerability by making x larger than the array size and accessing memory beyond the array. Hackers are very good at exploiting buffer overruns. But sadly (for hackers), programmers are getting better at putting these sorts of checks in (or having automated tools or higher level languages do it for them).

Now consider branch prediction. Suppose we execute this code hundreds of times with legitimate values of x. The processor sees that the conditional is usually true and the second line is usually executed. So when this code runs again, branch prediction will start executing the second line right away, while working out the first line in another execution unit at the same time. But what if we supply a large value of x? Branch prediction will still execute the second line, and y will be loaded from memory it shouldn’t be. But so what? Eventually the conditional in the first line is evaluated, and that value of y is discarded; some processors will even zero it out (after all, they do security review these things). So how does that help the hacker? The trick turns out to be exploiting processor caching.

Processor Caching

No matter how fast memory companies claim their super fast DDR4 memory is, it really isn’t, at least compared to CPU registers. To get a bit of extra speed out of memory accesses, all CPUs implement some sort of memory cache, where recently used parts of main memory are cached in the CPU for faster access. Often CPUs have multiple levels of cache: a super fast one, a fast one, and a not quite as fast one. The trick to getting at the incorrectly calculated value of y above is to somehow figure out how to access it from the cache. No CPU has a read-from-cache Assembler instruction; that would cause havoc and definitely be a security problem. This is really the CPU vulnerability: the incorrectly calculated buffer overrun value of y is sitting in the cache. Hackers figured out how, not to read this value directly, but to infer it by timing memory accesses. They can clear the cache (this is generally supported, and even if it isn’t, you can read lots of other data to flush it), then time how long it takes to read various memory locations. A byte in cache reads much faster than a byte from main memory, and this reveals what the value of y was. Very tricky.

Recap

So to recap, the Spectre exploit works as follows:

  1. Clear the cache
  2. Execute the target branch code repeatedly with correct values
  3. Execute the target with an incorrect value
  4. Loop through possible values timing the read access to find the one in cache

This can then be put in a loop to read large portions of a program’s private memory.

Summary

The Spectre attack is a very serious new technique for hackers to get at our data. This will be like buffer overruns: there won’t be one quick fix, and people are going to be patching systems for a long time. As more hackers understand this attack, there will be all sorts of creative offshoots that deal further havoc.

Some of the remedies, like turning off branch prediction or memory caching, would cause huge performance problems. Generally, the real fixes need to be in the CPUs. Beyond this, systems like JavaScript interpreters, or even the .NET runtime or Java VMs, could have this vulnerability in their optimization systems. These can be fixed in software, but now you require a huge number of systems to be patched, and we know from experience that this will take a very long time, with all sorts of bad things happening along the way.

The good news for Raspberry Pi Raspbian users is that the ARM in the older 32-bit mode isn’t susceptible; it is only susceptible through software uses like JavaScript. But as hackers develop these techniques going forward, perhaps they will find a combination that works on the Raspberry Pi, so you can never be complacent.


Written by smist08

January 5, 2018 at 10:42 pm