On the interpretation of data
We have seen that computer systems store their data as bits, and group bits
together as bytes and words.
However, it is important to realise that the processor can interpret a
sequence of bits only in context: on its own, a sequence of bits means
nothing.
A single 32-bit pattern could refer to:
- 4 ASCII characters,
- a 32-bit integer,
- 2 x 16-bit integers,
- 1 floating point value,
- the address of a memory location, or
- an instruction to be executed.
No meaning is stored along with each bit pattern:
it is up to the processor to apply some context to the
sequence to ascribe it some meaning.
For example, a sequence of integers may happen to form a sequence of valid
processor instructions that could be meaningfully executed; conversely, a
sequence of processor instructions can always be interpreted as a vector of,
say, integers, and those integers can be meaningfully added together.
Critical errors occur when a bit sequence is interpreted in the wrong
context. If a processor attempts to execute a meaningless sequence of
instructions, a processor fault will generally result:
Linux announces this as an "illegal instruction" error.
A similar fault, which Linux announces as a "bus error", occurs when
instructions expect data on aligned data boundaries but are presented
with unaligned addresses.
CITS2002 Systems Programming, Lecture 6, p9, 7th August 2024.