DIGITAL ALGEBRA
too far gone
Let's talk about computers. Of course, when talking about computers and how they work, one might be compelled by a strange force of supernatural will to look at examples of how the first computers worked. A quick search will invariably point you to the Turing machine, created by Alan Turing. Although to say "created" would of course be a bit of an overstatement. The "Turing machine" is a purely hypothetical mathematical construct. To my knowledge no one has ever created a real Turing machine, nor indeed could they, unless of course the whole fabric of reality is that Turing machine. You might then, as if possessed by the specter of the machine spirit, watch several YouTube videos describing how a Turing machine works. If you can glean any amount of information at all about it, you should be commended. No one /really/ knows how a Turing machine works, only that it supposedly does, and that it supposedly can compute any "computable" (whatever the hell that means) thing. The only person in all of history who ever knew how a Turing machine works was Ada Lovelace, and that was only by virtue of being the living avatar of the machine spirit given flesh. No, the Turing machine is a terrible, terrible way to learn about how computers work. They don't even have screens!

Instead it's much better to define some other terms and talk about those first, then work our way up to an abstract approximation of something at least resembling a modern-ish computer. So first let's talk about DIGITAL LOGIC.

Digital logic is opposed to analog logic. Theoretically it's possible to have adders that take in a couple of continuous values and output a continuous value. The thing is, it's rather hard to build complex systems that way. For some examples of analog systems like that, you could look at old CRT-style video processing.

Digital logic is an abstraction. You are no doubt familiar with ones and zeros. There is no such thing as a one on a wire; there is only a voltage potential which we call one in a given context. The same voltage potential could be considered a one on one system and a zero on another. This isn't to say there is no reason for the specific voltage potentials called one or zero on a given system. The specifics have to do with saturation values in variously doped silicon. We don't need to concern ourselves with this for our present discussion, but if you want to dive down that particular "electronics engineering" rabbit hole, here are some terms to give you a place to start: bipolar junction transistor (BJT), field-effect transistor (FET), band gap, silicon doping. The only thing that really matters for the discussion at this point is that these are small bits of crystal whose output changes state when the input crosses a certain voltage potential, and that we call all voltages below that threshold zero and all voltages above it one. (Well, not all voltages, since voltages too high will blow it up. Let's be reasonable.)
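If you want to poke at that abstraction in code, here's a toy Python sketch. The threshold number is something I made up for illustration; real logic families define separate high and low bands with an undefined no-man's-land in between.

    # Toy model of the digital abstraction: a continuous voltage gets
    # read as an abstract 1 or 0 by comparing against a threshold.
    THRESHOLD_VOLTS = 2.5  # made-up value, purely for illustration

    def logic_level(voltage: float) -> int:
        """Quantize a voltage into the abstract 'one' or 'zero'."""
        return 1 if voltage >= THRESHOLD_VOLTS else 0

    print(logic_level(0.3))  # -> 0
    print(logic_level(4.8))  # -> 1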
So we have things we can call one and zero. But what can we do with these things? Or rather, how can we make things that do what we want?

LOGIC GATES

Let's introduce a few basic operations we can generally perform regardless of chemistry, media, or architecture. "Generally" because in practice it may not be possible to implement each of these simply, but because of interesting symmetries in how they combine, it's possible to use technologies that operate in one way to simulate another. This will make more sense with specific examples we will touch on later.

The first thing we might want to be able to do is invert a signal. That is, if we get a 1 we want to output a 0. We call this a NOT-GATE. It is drawn like this ( triangle with circle at the tip ). This is the truth table for the NOT-GATE:

IN | OUT
--------
 0 |  1
 1 |  0

A truth table is a way of representing the possible states of a digital logic system. For the simple gates they tend to be trivial; they are generally more useful as a starting place for designing a circuit, as there are ways we can use them to mathematically reduce a design by combining similar states, and such. We will talk about that in a little while as well. For now we have the set of inputs on the left and desired outputs on the right. For the following gates I'll use three-input versions, since I feel like while the two-input representation is adequate, the three-input one can make some things a little more obvious.

Next we have the OR-GATE, which is drawn like this ( crescent moon ). The OR-GATE takes in two or more values and outputs one if any of them are on at all. The truth table for a three-input OR-GATE looks like this:

ABC | OUT
---------
000 |  0
001 |  1
010 |  1
011 |  1
100 |  1
101 |  1
110 |  1
111 |  1

As the astute among you might notice, a 3-input OR-GATE could be composed of two 2-input OR-GATEs, where the output of the first feeds into an input of the second. Additionally, a 2-input OR-GATE could be created out of a three-input one by tying two of the input lines together. This kind of composition of gates is fundamental to any complex digital logic construction.

The last of the fundamental gates is, as you could probably guess, the AND-GATE. The AND-GATE takes in 2 or more inputs and outputs 1 if all the inputs are 1, and zero otherwise. The drawn representation of the AND-GATE looks like this ( half moon ). The truth table for a three-input AND-GATE looks like this:

ABC | OUT
---------
000 |  0
001 |  0
010 |  0
011 |  0
100 |  0
101 |  0
110 |  0
111 |  1

Now it could be said that the AND-GATE isn't fundamental, since it can be constructed out of nothing but OR and NOT-GATEs, by NOTing both inputs, ORing the outputs of the NOTs, then NOTing the output of the OR. But I don't know anyone who wouldn't call NOT, OR, and AND fundamental gates.

The NOTed outputs of AND and OR are common enough that they have their own drawn representations and names: NAND and NOR. Their truth tables are simple enough to derive given the above that I won't bother. If you are struggling to see how simple they are, take some time and work them out yourself as an exercise. Things won't be getting any easier.

A clever person might look at the NAND or NOR gate and make the observation, "why bother with any other gate at all? I can make all the other ones just by combining this one in different ways." Yes, you are right. Using only NAND or NOR you can make literally any digital logic circuit at all. Which one actually ends up on the chip is a function of implementation, chemistry, physics, and probably a bunch of other factors.
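To make that universality claim concrete, here's a little Python sketch (the function names are mine) that models the gates as functions over 0/1 values, rebuilds NOT, AND, and OR out of nothing but NAND, and checks the rebuilds against the originals on every input:

    # The basic gates as plain functions over 0/1 values.
    def NOT(a): return 1 - a
    def AND(*xs): return int(all(xs))
    def OR(*xs): return int(any(xs))
    def NAND(*xs): return NOT(AND(*xs))
    def NOR(*xs): return NOT(OR(*xs))

    # NAND is universal: the other gates built from nothing but NAND.
    def NOT_from_nand(a): return NAND(a, a)
    def AND_from_nand(a, b): return NAND(NAND(a, b), NAND(a, b))
    def OR_from_nand(a, b): return NAND(NAND(a, a), NAND(b, b))

    # Check the NAND constructions against the real gates on all inputs.
    for a in (0, 1):
        assert NOT_from_nand(a) == NOT(a)
        for b in (0, 1):
            assert AND_from_nand(a, b) == AND(a, b)
            assert OR_from_nand(a, b) == OR(a, b)

    # And the composition trick: a 3-input OR from two 2-input ORs.
    def OR3(a, b, c): return OR(OR(a, b), c)

Note that OR_from_nand is just De Morgan's law in disguise, which we'll meet properly a bit later.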
Anyway, there's another notable gate we should mention, not because it can't be constructed from the previous gates, but because its function is used often enough, and it's simple enough to understand as a single unit (and because in some construction methods it's easy to make the NOTed version of it), that leaving it out would make some digital logic diagrams needlessly complicated. That is the EXCLUSIVE OR-GATE, or XOR-GATE. It looks like the OR-GATE with an additional bar on the input side. The truth table for a three-input version of it is as follows:

ABC | OUT
---------
000 |  0
001 |  1
010 |  1
011 |  0
100 |  1
101 |  0
110 |  0
111 |  1

The output is 1 if and only if the number of 1s in the input is odd. The XNOR-GATE is exactly as above with the outputs inverted.

As an exercise, try using the 3 "fundamental" gates to generate the output of a two-input XOR-GATE. If you've never done digital logic stuff, it's a good exercise which will really help you understand how to combine gates to make arbitrary outputs. There are various ways to do it, some "better" than others, but as a hint to get 'a solution': often in digital logic you want both a value and its negation, combining them in a layer of AND-GATEs that match inputs and then merging those in an OR-GATE layer.

At this point we know enough about the logic gates to start making "useful" things. So let's do the incredibly stereotypical thing and build an adder. But before that it might make some sense to actually define what we mean by adding in binary, since we are working in binary. I mean, it might be obvious, but it's probably better not to assume. Adding in binary is very easy, since we only have to really consider 4 possibilities, and 3 of them involve adding 0 to something. Adding 0 to any value will leave the other value unchanged, so for these 3 possibilities we get easy results. The last possibility is adding 1 to 1. Just like adding in decimal, if the number goes above the highest single-digit value you roll the value over to the next place, so 1 + 1 is 10.

So an adder for 2 bits needs at least 2 outputs: one for the place value of those digits, and a second one for any carry value, or "overflow". There is another input to consider when thinking about building a sensible adder, however, and that's the possible carry value from a previous bit. For a one-digit adder this would never be used at any value other than zero, but if we take it into account in the first place it means we can create adders of any bit length just by stacking the adders and feeding the carry out of each bit into the carry in of the next.

So how do we go about designing this? Well, let's start with what the truth table looks like. A is the first value, B is the second, C is the carry in; Z is the place value, and Y will be the carry out.

ABC | YZ
--------
000 | 00
001 | 01
010 | 01
011 | 10
100 | 01
101 | 10
110 | 10
111 | 11
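Before we get to deriving gate expressions, here's a quick hedged Python sketch of that stacking idea: a one-bit full adder lifted straight from the truth table (as a lookup), chained into a ripple-carry adder of arbitrary width. Names like ripple_add are mine.

    # One-bit full adder taken directly from the truth table above.
    FULL_ADDER = {  # (A, B, carry in) -> (Y carry out, Z place value)
        (0, 0, 0): (0, 0), (0, 0, 1): (0, 1),
        (0, 1, 0): (0, 1), (0, 1, 1): (1, 0),
        (1, 0, 0): (0, 1), (1, 0, 1): (1, 0),
        (1, 1, 0): (1, 0), (1, 1, 1): (1, 1),
    }

    def ripple_add(a_bits, b_bits):
        """Add two equal-length bit lists, least significant bit first."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            carry, z = FULL_ADDER[(a, b, carry)]
            out.append(z)
        return out, carry  # sum bits, plus the final overflow

    # 3 + 5: LSB-first, 3 is [1, 1, 0] and 5 is [1, 0, 1].
    print(ripple_add([1, 1, 0], [1, 0, 1]))  # -> ([0, 0, 0], 1), i.e. 8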
So, how do we actually use this to help? You could spend some time looking at it and trying to make some kind of connection of gates that satisfies the requirements, but we can do a little better. Introducing Karnaugh Maps, or K-Maps for short. K-Maps are a way of organizing truth tables so that it becomes very clear what relationships drive a given output.

The first thing about K-Maps is we aren't going to be using normal increasing binary values as we have in the truth table. Instead we will use a Gray code. Gray code is a way of counting where we only allow one bit to change from one position to the next. So after 00 would come 01 as normal, but after 01 we go to 11 instead, and then we go to 10 after that. It's possible to make Gray codes for binary values of any bit length by following the same pattern. There is a "canonical" pattern, but all that's really important is to make sure only one bit changes from position to position and that you don't end up repeating the same value multiple times. Next, instead of listing all the values in a single column, we split them into labels for a grid where the output value ends up in the middle. I'm not sure if that explanation makes sense in words, but it should become clear once I provide the example. Lastly, K-Maps only work for a single output bit at a time, so for this example we will need two K-Maps, one for Y and the other for Z.

Y
AB\C  0   1
 00 | 0 | 0 |
 01 | 0 | 1 |
 11 | 1 | 1 |
 10 | 0 | 1 |

Z
AB\C  0   1
 00 | 0 | 1 |
 01 | 1 | 0 |
 11 | 0 | 1 |
 10 | 1 | 0 |

So how exactly does this help us? Well, let's talk about the Y output first. We can see that the ones are all in a clump, as are the zeros. These groups, thanks to Gray code, represent places where some input "falls out" of relevance. Looking at the row where A and B are both 1, for example, we can see that it doesn't matter if C is 1 or 0. Since we are dealing with binary values, the clumps we care about need to have a size that is a power of two to be relevant, so that column of 3 ones is best understood as 2 overlapping clumps: one where B and C are one, and the other where A and C are one. It's also possible to have clumps that are 2 by 2, or on bigger maps even 8 by 32; what's important is that the clumps are powers of two in each dimension. It also doesn't matter if the clumps overlap.

It's a bit cumbersome to talk about these clumps in sentence form, so of course digital logic people have a special way of notating these kinds of expressions, using a branch of maths called Boolean algebra, which doesn't really seem like algebra but apparently is related somehow. Anyway, for Y it would look like this: Y = AB + BC + AC. When the inputs are next to each other it represents an AND-GATE; it can also be drawn with a dot, as if it were multiplication in normal algebra. The + represents an OR-GATE. To represent a NOT-GATE we would normally put a macron over the top of the NOTed expression, or, where that doesn't work, an ' mark. So we can define Y' as Y' = A'B' + A'C' + B'C' by using the groups of zeros instead of ones. This also shows that the groups can wrap from bottom to top of the K-Map. They can also wrap from left to right. This is because we are using Gray code, so there's only one bit of change even between the top and bottom of the grid.

It should probably be mentioned here that what I'm using is called the "Sum of Products" method of analyzing the K-Map. There is also a "Product of Sums" form, which can be useful if there are fewer clumps of zeros than of ones. They are equivalent due to symmetries of the combinations of AND-GATEs and OR-GATEs. To find the PoS form of the expression for this K-Map, we look at the groups of zeros in the map, take the complements (NOTed versions) of the inputs and OR them within each group, then AND the groups together, which looks like Y = (A+B)(A+C)(B+C). Which form to reduce to ultimately depends on the underlying technology we plan on building the circuit with.

There are a few rules about how it's possible to transform these expressions, like (A'B')' = (A+B). It's also possible to use a distributive rule to pull out terms; for example, if we had ABC' + AB'C we could pull out the A, and that would result in A(BC' + B'C). This latter expression can be further reduced to A(B xor C) due to the construction of an XOR-GATE. (You did do the exercise of building a two-input XOR-GATE, right? If not, take another look using that (BC' + B'C) simplification to verify it. I did tell you it was important.)
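Since we now have two supposedly equivalent expressions for Y read off the same map, it's worth a quick sanity check. A minimal Python sketch, brute-forcing all eight inputs against the Y column of the truth table:

    from itertools import product

    for a, b, c in product((0, 1), repeat=3):
        carry = int(a + b + c >= 2)          # Y column: 1 when two or more inputs are 1
        sop = (a & b) | (b & c) | (a & c)    # Y = AB + BC + AC
        pos = (a | b) & (a | c) & (b | c)    # Y = (A+B)(A+C)(B+C)
        assert sop == pos == carry
    print("SoP and PoS forms both match the truth table")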
Anyway, let's look at the Z K-Map. Uh oh. All those ones are all by themselves, and not in any bigger groups at all. If we look at the inputs for the ones, it's possible to see that the output is one in all the cases where there is an odd number of ones, and zero otherwise. But it's not exactly something that jumps out at you. The diagonal "checkerboardy" pattern can be considered a kind of clue that something like that is happening, but it doesn't exactly jump out and announce itself. No matter, we can do some Boolean algebra to arrive at a simplified expression.

First let's add up the expressions for each group of ones. There are 4 groups, each a single cell defined by all three inputs:

Z = A'B'C + A'BC' + ABC + AB'C'

Next we can pull out some terms to reduce what we are looking at a little. In this case it doesn't matter which we end up using; A is first and looks ripe, so may as well go with that. In some cases not picking the /most/ shared term will lead to a bit more work, but in this case every term is just as distributed as the others.

Z = A'(B'C + BC') + A(BC + B'C')

Now we are going to need to apply some rules I haven't talked about. Firstly we have a rule called De Morgan's law, which says that the AND of two inputs is equal to the NOR of the NOTs of those inputs. Similarly, the OR of two inputs is equal to the NAND of the NOTs of those inputs. We can show this is true with a truth table (each row of the second table is the NOTed version of the corresponding row of the first; note the outputs match):

AB | AND
--------
00 |  0
01 |  0
10 |  0
11 |  1

A'B' | NOR
----------
 11  |  0
 10  |  0
 01  |  0
 00  |  1

So first we are going to use this on the insides of the (BC + B'C'), which results in (B'+C')' + (B+C)'. Next we will use it again on the whole term we just got, which will give us [(B'+C')(B+C)]'. After that we can distribute the terms, in the same way as in algebra with terms like (1+x)(2+y) -> (2+y+2x+xy); doing this with (B'+C')(B+C) results in B'B + B'C + BC' + C'C. Any value ANDed with its negation is always 0, so the B'B and C'C terms drop out of the equation. Additionally, 0 OR anything is just whatever that anything is. This leaves us with B'C + BC' for the inner value. After all this nonsense the full equation is

Z = A'(B'C + BC') + A(B'C + BC')'

Lastly, the identity of XOR is A xor B = A'B + AB'. The inner terms satisfy that for B and C, and the outer structure ALSO satisfies it for A and (B xor C), so the whole thing reduces to A xor (B xor C), or just

Z = A xor B xor C
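If you don't trust all that symbol shuffling (fair), here's one last little Python sketch brute-forcing the reduction: the original four minterms, the factored middle step, and the final XOR chain all agree on every input.

    from itertools import product

    def n(x):
        return 1 - x  # NOT

    for a, b, c in product((0, 1), repeat=3):
        minterms = (n(a) & n(b) & c) | (n(a) & b & n(c)) | (a & b & c) | (a & n(b) & n(c))
        factored = (n(a) & ((n(b) & c) | (b & n(c)))) | (a & ((b & c) | (n(b) & n(c))))
        xor_form = a ^ b ^ c
        assert minterms == factored == xor_form
    print("Z = A xor B xor C checks out")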
tags:
documentation mind numbers safe utility