On May 30th, 1832, 20-year-old Évariste Galois was fatally wounded in a duel. He died the next day. The reason for the duel is still unknown.

During his life, Galois was an activist, a revolutionary, a soldier, and a mathematician; he spent time in prison; he was expelled from one school for his politics, and failed admission to another because the examiner was too stupid to understand what the hell he was talking about.

Galois left behind a number of significant mathematical manuscripts, including one that developed the concept of "finite fields", which are often called "Galois fields" in his honour.

Mathematical fields are an extension of "rings". A ring is a set of numbers which is closed under addition and multiplication. That is, if you add (or multiply) two numbers in a ring, the result is another number in that ring. The set also contains an additive identity element (zero) which, if added to any element of the set, produces the original element. Integers are an obvious example of a ring; adding two integers together always produces another integer.

A field is a ring which is also closed under division. Rational numbers (anything of the form x/y, where x and y are integers and y isn't zero) form a field, since dividing one rational number by another always produces a third rational number. (The exception is division by zero; if you do that, then the Math Police will appear and Fuck Up your Shit.) In addition to zero, a field adds another "special" element, the *multiplicative identity*, one.

Integers do not form a field; if you try to *divide* an integer by another, you don't necessarily get another integer. (Divide one by two and everybody dies.)

Both the ring of integers and the field of rational numbers are unlimited in size (you can always make a bigger number by adding to it).

A finite field, as its name implies, has a finite number of elements. But how can you allow for arbitrary addition, if there is a "largest" element? The answer is to perform addition *modulo* the field size.

For example, let's take a field of size 7 (named GF(7), or Galois field 7), which contains the elements {0 1 2 3 4 5 6}. In the field of rational numbers, 5+6 = 11, but 11 lies outside of GF(7), which clearly won't do. But (5+6) mod 7 = 4, and 4 is one of our "allowed" elements. Similarly, any other modulo-7 addition using the elements {0..6} produces an answer in the range {0..6}. Multiplication is simply repeating addition some number of times.
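The modulo-7 arithmetic above can be sketched in a few lines (Python here purely for illustration; the helper names are mine):

```python
# A minimal sketch of GF(7) arithmetic: addition and multiplication
# are ordinary integer operations, reduced modulo the field size.
P = 7  # field size (must be prime for this simple approach)

def gf_add(a, b):
    return (a + b) % P

def gf_mul(a, b):
    return (a * b) % P

print(gf_add(5, 6))  # 4, matching the example above
```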

Division is a matter of finding a multiplicative inverse for each element in the field (other than zero). Let's say we want to compute 1/3 in GF(7). By brute force (working mod 7): 3+3=6, 3+3+3=2, 3+3+3+3=5, 3+3+3+3+3=1 (finally). So 3*5=1, or 1/3 = 5.
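The brute-force inverse search can be sketched the same way (again, the function name is mine):

```python
# Brute-force search for a multiplicative inverse in GF(7):
# try every nonzero element until the product is 1.
P = 7

def gf_inverse(a):
    for x in range(1, P):
        if (a * x) % P == 1:
            return x
    raise ValueError("zero has no inverse")

print(gf_inverse(3))  # 5, so 1/3 = 5 in GF(7)
```

Since 7 is prime, every nonzero element has exactly one inverse, so the loop always finds one.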

Because reasons, the modular addition approach only works for creating finite fields which have a prime number of elements. However, we can use *field extensions* to construct fields whose size is a prime power. A useful field size is 2^8, since we can then treat a byte (or octet, if you're a Frenchman) as an element in GF(256).
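As a hedged sketch of what GF(2^8) arithmetic looks like in practice: addition is XOR, and multiplication is carry-less shift-and-XOR, reduced by an irreducible polynomial. The reducing polynomial 0x11b below is an assumption on my part (it's the one AES happens to use); different applications pick different irreducible polynomials.

```python
# Multiplication in GF(2^8), "Russian peasant" style. Each element is a
# byte, viewed as a degree-7 polynomial with bit coefficients. Addition
# is XOR; multiplication shifts and XORs, reducing whenever the running
# value overflows 8 bits. 0x11b encodes x^8 + x^4 + x^3 + x + 1.
def gf256_mul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a       # "add" (XOR) a into the running product
        b >>= 1
        a <<= 1
        if a & 0x100:         # overflowed 8 bits: reduce by the polynomial
            a ^= 0x11b
    return result

print(gf256_mul(3, 3))  # 5: (x + 1)^2 = x^2 + 1, since 2x vanishes mod 2
```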

Here's a link, if you want an introduction to field extensions:

https://johnkerl.org/doc/ffcomp.pdf

The horror begins when we discover that Galois fields aren't just a mathematical curiosity, but that they have Practical Uses.

In digital communications, forward error correction is the process of adding redundant information to a transmitted signal so that if part of the signal is damaged in transit, it can be repaired at the receiver (up to some limit). This is why DVDs can tolerate a small amount of surface scratches, and why your phone doesn't drop the signal every time a bird poops on the cell tower.

You *could* accomplish the same effect by cranking up the power, or by sending the message repeatedly, but adding 10-25% redundant data (and shuffling the data around a bit) is more efficient.

It turns out that Galois field arithmetic provides us with the tools to perform some types of forward error correction.

The Reed-Solomon and BCH error correction codes both take a sequence of bytes (or bits), treat the sequence as a polynomial over a Galois field, and divide that polynomial by a special "generator" polynomial. The remainder of this division is appended to the original message before it is transmitted.

As long as the message doesn't accumulate too many errors in transit, the receiver of the message can perform a further sequence of finite field operations to determine if the message was received properly, and repair the damage. In general, the number of errors that can be corrected depends on how much redundancy is added.

I'm starting to ramble; let's cut this short.

Further reading:

https://en.wikipedia.org/wiki/%C3%89variste_Galois
https://en.wikipedia.org/w/index.php?title=Galois_field
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
https://en.wikipedia.org/wiki/BCH_code
https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Massey_algorithm
https://en.wikipedia.org/wiki/Chien_search