Current Topic: The most flexible aspect of C programming is the pointer. However... Pointers are also dangerous, requiring special care when manipulated. One false move and it's 'Segmentation Fault' for your program.
Why Are Pointers So Important...
As I remember... In the early days of C language development there was a debate about whether or not to include the power of the pointer. Ultimately, though, the argument weighing the risks against the benefits was clear. Without pointers, programs would be highly restricted. With pointers, it became possible for a function call to return more than a single result. Pointers also commonly reduced call-stack requirements, which was very important when many kernels would give a process perhaps a 256-byte stack. Programmers could then use small memory allocations and pointers to write programs that would otherwise be impossible if all data had to be passed by value. Without pointers, reentrancy and multi-tasking would also be very difficult to implement.
Hazards When Using Pointers...
When a function is passed a pointer, it receives a direct memory address for the data. Usually that data belongs to something else, which means corrupting the data through the pointer will almost certainly cause a program failure elsewhere. It's important to understand that compilers do not validate most uses of pointers. The compiler assumes the program knows what it is doing and will do exactly what the code says, not what the programmer means.
Passing Data By Reference...
Blah, blah!
Accessing Pointer Data...
Blah, blah!
Pointer Math Depends On The Pointer Data Type...
A pointer carries its 'Data-Type' through all math operations. It is 'extremely' important to understand this when the pointer represents an array (of a defined type). In all cases, when adding to, subtracting from, or indexing the pointer, the resulting address moves by the offset (+, - or [n]) multiplied by 'sizeof()' the Data-Type.
How does this work? Consider a typical 32-bit machine model where char is one byte, long is 4 bytes, and the following definitions apply...
char *pc;
long *pl;
In this case there are two pointers of different types (char and long). Dereferencing these pointers returns completely different results. Breaking it down fundamentally, using the following expressions...
char cvalue = *pc++;
long lvalue = *pl++;
Assuming both 'pc' and 'pl' initially point to address 1024: 'cvalue' will equal the char at address 1024 and pc will advance to 1025 (pc + sizeof(char)). On the other hand, 'lvalue' will equal the long value at address 1024 and pl will advance to 1028 (pl + sizeof(long)). This rule holds true for all array types and their implied sizes.
Arrays And Pointer Math Optimization Methods...
Here's where it gets a little tricky... There are usually multiple methods a programmer can choose when manipulating arrays. As an example...
*array++ = target;
array[index++] = target;
array[offset + index] = target;
From a programmer's perspective these may seem synonymous; to a compiler, however, they present choices, because there are usually multiple options when translating to assembly (machine) code. Modern microprocessors have several 'addressing' modes a compiler can choose from: absolute, indirect, indirect with post-increment or pre-decrement, indexed indirect, and indirect with displacement, to name a few. Many of these modes can be combined. Furthermore... Different addressing modes may require additional parameters (operands) to be fetched from the instruction stream, adding penalties to instruction execution timing (speed).
Most compilers are tuned to recognize certain coding syntax, triggering efficient translation of C code to machine code. Unfortunately this can lead to less readable code. A.K.A. 'Why did the original author do it that way?'. Sometimes it is due to training. Other times it may be personal preference, simplicity, or what worked naturally with the code. Or maybe it was optimized for a given target machine.
Fortunately... It is rarely necessary to consider these differences for most programs. After all, that's what the compiler is for. There are some instances, however, where the code has to be optimized: perhaps drivers or services, or maybe your program simply manipulates a lot of data.
In these cases it is important to be able to understand the assembly code the compiler generates. It's surprising how often a change in code syntax can turn twenty processor instructions into a single operation. When iterated in a loop several thousand times and updated several times a second, that's hundreds of millions of wasted instructions executed.
If you have a serious need to optimize your code, I suggest reading the 'Programming Manual' for your target processor. It describes all the instruction capabilities, so you will be able to understand the assembly code the compiler generates, which has a one-to-one relationship with the machine code the processor will execute. It also explains the execution timings for the different methods of data access, including additional operand-fetch penalties.
Optimizing Data Manipulation...
Again... Here is where a good understanding of how the target machine operates on data is very useful. Many processors have severe (timing) penalties when moving misaligned data, that is, data not aligned on a boundary that is a multiple of the data-bus width.
A programmer can also take advantage of this using some pointer trickery. Consider moving 256 bytes of data where the data-bus width is 4 bytes (32 bits). If the program assigns two pointers of type 'char' and uses a loop with '*pointer++', the compiler will generate code that executes 256 move operations. However, since the processor is capable of moving 4 bytes in a single instruction, the pointers can instead be defined as type 'long' (assuming a 4-byte long), causing the compiler to generate code that moves the same data in 64 instructions.
Using trickery requires a little understanding of the magic. In this case the magic is the processor's 'Programming Manual'. The above coding suggestion is usually only possible through 'casting' of pointers, which is potentially a fatal mistake. A compiler will (after complaining) allow you to cast a pointer to satisfy a type mismatch. This is where misalignment comes into play. On some processors, moving data larger than a byte to or from an odd (numerically unaligned) address can cause a 'Bus Error'. This effect propagates from 16-bit to 128-bit architectures. Mistakes like this can sometimes be corrected, but not without a lot of extra code being executed, for every move operation. Worst case... Segmentation Fault. This is the reason the compiler complained before the programmer 'cast' (reassigned) the pointer type.
How does a program account for misalignment? It makes sure the pointers are assigned aligned values. This is accomplished by determining how many excess bytes sit at the head, and possibly the tail, of the buffer. Move the head bytes one at a time until reaching the alignment boundary, follow with the aligned moves, and then move any remaining unaligned tail bytes. In the current example the worst case may be 6 extra moves plus the overhead it takes to calculate all the miscellaneous alignment (setup).
Beware though... Changing the example to move only 64 bytes, or fewer, significantly changes this optimization. The overhead needed to calculate and move the unaligned data, and then the remaining data, may take longer than a quick and simple loop moving the data as bytes.
Typical Pointer Use Examples...
Blah, blah!