[Music] In this video I will try to give a broad overview of how a computer works from the ground up. Subtitles are available. It will get a little bit technical, but I will try to keep things as simple as possible.
There will be a summary at the end of every section and links to further explanations in the description. Hopefully this overview will teach you something new or give you a new perspective on things. Probably nobody in the world knows every single nuance of how a computer is built and operates.
This is why when you hear this sound [Beethoven's 5th symphony] that is a warning that I am making massive oversimplifications which I'm sure you'll point out to me in the comments. [Beethoven's 5th symphony] That was the warning for the entire video. [Music] Modern computers at their core are electrical circuits.
A very simple circuit might consist of a battery connected to a switch, then a light bulb, then back to the negative terminal of the battery. Modern computer components typically use a voltage of 3.3 or 5 volts between the positive terminal and ground, or the negative terminal if you like to think of it that way.
Computers treat information as binary: on or off, true or false, one or zero, depending on whether the voltage is high or low. The important component in our circuit here is the push switch, which comes in two flavors. Push to make: when a person presses down, the circuit is completed between the two terminals and the bulb lights up.
The push to break switch is exactly the opposite. This is the symbol for a transistor: more specifically, it's a Metal Oxide Semiconductor Field Effect Transistor or MOSFET. Its source and drain are like the two terminals of a switch, but it has a third terminal called a gate.
Just like switches, there are two types: the N-channel type makes a circuit when a high voltage is applied to the gate, like a push to make. The P-channel type breaks the circuit when a high voltage is applied, like a push to break. So, transistors act like switches, except instead of a person pressing them down, a voltage is applied.
In modern computers, these types of transistors are used in pairs referred to as CMOS or Complementary Metal Oxide Semiconductor. The gate terminals of a given pair are always connected together so they receive the same input. Here is a simple example: if we apply a high voltage or 1 to the input the N-channel MOSFET opens a route between high voltage and the output.
The P-channel closes the route to ground. If we apply a low voltage or 0 at the input, the P-channel opens a path to ground. OK, so this circuit seems pointless: we get the same thing at the output as we did at the input. But let's swap around the two MOSFETs.
Now, when we apply a high voltage or 1 to the input the N-type opens a route to low voltage or 0. When we apply a 0 to the input we get a 1 at the output. Whatever we put in at the input, we get the opposite at the output.
Later we will come back to how this forms the basis of all modern processors. Understand that it is vast networks of CMOS pairs like these which allow computers to think. [Beethoven's 5th symphony] For now, let's see how transistor networks are made in basic terms.
Silicon is a semiconductor with four valence electrons which is doped, or mixed with, other elements. Gallium, for example, has three valence electrons - one less than silicon, hence more positive or P-type. Meanwhile, an element like arsenic has five valence electrons - one more than silicon, therefore adding a negative charge, hence N-type.
Trace amounts of these elements, typically one for every million silicon atoms or so, are implanted by chemical diffusion or by firing them out of a particle accelerator. Transistors are made by sandwiching a region of one of the dopants between two of the other. Let's look more closely at the N-channel configuration.
This by itself does not conduct electricity, but when an electric field is applied to it a conducting channel forms, hence the name: Field Effect Transistor. This can be thought of in simple terms as the electric field attracting electrons from the N-doped region, which has a surplus, to the P-doped region, which has a deficit. Near the semiconductor stack we have a conducting electrical terminal.
When it's at high voltage, an electric field forms and the transistor conducts, but to stop a short circuit we need to add an insulating layer between them. The contact is metal, the insulator is silicon oxide and finally we have the doped semiconductor, hence the term MOSFET. For a computer we need millions of interconnected CMOS pairs and they must be small and close together so that electrical signals can travel between them quickly.
Making a processor typically begins with a solid block of P-doped silicon called a die. Transistors are then created on top of the die by a process called lithography: "writing on stone". There are two main techniques used in lithography.
The first is deposition: chemically building up a layer of controlled, predetermined thickness across the entire die. This means that, for example N-doped silicon can be coated all over the top of the P-doped die. Unwanted surface material is then removed by a second technique called etching.
This is done by applying chemicals or hot plasma to the top of the die. Several steps are taken to ensure that the etching is targeted at specific areas and therefore material is removed selectively. A chemical termed a photoresist is deposited onto the die.
Light is shone through a mask, exposing predetermined areas of the photoresist. The exposed areas must be the same size as or larger than the wavelength of light used, hence very short wavelength ultraviolet light is used for the lithography. A special etching method is used to remove only the regions of photoresist which were exposed and nothing else.
This opens up gaps in the photoresist and then another etching method is used which targets everything except the unexposed photoresist. Finally, the remaining photoresist is etched away. As a result, material has been removed wherever there are any gaps in the mask.
The two techniques of deposition and etching used in conjunction mean that the N-type dopants for all the transistors on the die can be added simultaneously. Next, the silicon oxide is deposited and etched and finally metal is deposited and etched away to leave metal channels on top of the die. These metal features form not just the conducting terminals of the transistors, but also the tracks equivalent to wires in a circuit which connect the transistors as required.
Each time, photoresist is deposited, exposed through a mask, gaps are opened and the die is etched. Finally, the billions of transistors are assembled and connected together by this method. There may be three CMOS pairs with three different inputs then going out to four different CMOS pairs somewhere else, and so on and so on.
[Beethoven's 5th symphony] To summarize: computers handle information as ones and zeros when voltage in an electrical circuit is either high or low. Transistors act as electronic switches, changing between these two states. The transistors always come in complementary pairs, so when one half of the pair opens, the other closes.
Lithography is the process by which huge numbers of interconnected transistors are built up on a block of silicon. [Music] To understand why transistor networks are useful, we need to talk about a branch of mathematical logic called Boolean algebra. Rather than go into the full gory mathematical details, I will just illustrate the basic concepts with some not too serious examples.
Let's say some friends are choosing a restaurant. Alice is happy when the restaurant serves burgers. Bob is happy when the restaurant does not serve burgers.
We can think of evaluating restaurants in this case as something called a gate: an object which takes a binary input true or false, 1 or 0 and returns a binary output. In this case, the input is whether the restaurant has burgers and the output is if someone is happy with the restaurant. For Alice the gate is simple: the output is the same as the input.
In electronic terms the circuit is just a connection. For Bob the gate is what's unimaginatively called a NOT gate. This is the symbol typically used to denote it.
In the previous section we saw how this CMOS pair acts as a NOT gate. Let's consider a more interesting example, where logic gates can have multiple inputs. Such gates have pretty straightforward names, as we'll see.
Now the friends are choosing restaurants which might serve pizza, noodles, both or neither. Alice is happy when the restaurant serves at least one of the two options or both. This idea is represented by what is called an OR gate.
This is the symbol. Bob wants choice, so he wants both pizza and noodles on the menu, hence the AND gate. Charlie likes it if the restaurant serves pizza or noodles but not both: let's say she thinks that they wouldn't do a good job if they weren't specialized enough.
The corresponding gate is called exclusive OR, or XOR for short. Each of these gates also has a complementary gate: basically the same gate with a NOT gate after it. For example, the NAND gate gives a 0 when all its inputs are 1, and gives a 1 otherwise.
These complementary gates are denoted by the usual symbol with a circle at the output. For a computer, these and all other possible gates must be implemented through CMOS pairs. Connecting two pairs as follows creates a NAND gate.
The N-channel and P-channel transistors still work in pairs. Only when a high voltage, or 1, is applied to both inputs do the paths to high voltage close and the path to ground open, so a 0 is output. This is exactly the behavior of a NAND gate.
The NAND gate is important because it's what's called universal: this means that any other gate can be created out of NAND gates as long as there are enough of them. Now, there are more efficient ways to make certain gates, but in essence this means that the problem of implementing logic gates through CMOS pairs is solved.
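To illustrate that universality, here is a short C sketch (a logical model only, not electronics) in which every other gate is built out of nothing but a NAND function:

```c
#include <stdio.h>

/* A NAND gate modelled on single bits (0 or 1). */
static int nand(int a, int b) { return !(a && b); }

/* Every other gate built purely out of NANDs. */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); }
static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); }
static int xor_(int a, int b) {
    int n = nand(a, b);
    return nand(nand(a, n), nand(b, n));
}

int main(void) {
    /* Print the truth table for every combination of inputs. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NAND=%d NOT(a)=%d AND=%d OR=%d XOR=%d\n",
                   a, b, nand(a, b), not_(a), and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```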
There are also gates with three, four, or as many inputs as necessary. For example, take two AND gates connected as shown here and they simply become a three-input AND gate, and so on and so forth. [Beethoven's 5th symphony] To summarize: logic gates are conceptual mathematical objects which allow us to make deductions based on binary true or false statements. For example, if it's true that today is Saturday or today is Sunday, then it's also true that today is the weekend.
We have now seen how the CMOS pairs from the last section can be used to practically implement these gates in terms of electronic binary logic. [Music] Now we shall see how computers handle numbers and use logic gate networks to do arithmetic. We are all used to base 10 numbers meaning that there are 10 digits from 0 to 9.
The digits represent multiples of powers of 10 which are 1, 10, 100, 1000, and so on. Modern computers use base 2 numbers with just 2 digits: 0 and 1. Binary digits are multiples of powers of 2 which are: 1, 2, 4, 8 and so on.
For example, in base 10 the number 953 is 9 times 100 plus 5 times 10 plus 3 times 1. In binary, the number 1 0 1 is 1 times 4 plus 0 times 2 plus 1 times 1 which is 5 in base 10. Any number in base 10 can be represented by a number in base 2 and vice versa.
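As a quick illustration (not from the video, and the function name is just my own), a few lines of C can do this place-value conversion from binary digits to base 10:

```c
#include <stdio.h>
#include <string.h>

/* Interpret a string of '0'/'1' characters as a base-2 number. */
static unsigned long from_binary(const char *bits) {
    unsigned long value = 0;
    for (size_t i = 0; i < strlen(bits); i++)
        value = value * 2 + (bits[i] - '0');  /* each step shifts left and adds the next bit */
    return value;
}

int main(void) {
    printf("101 in binary is %lu in base 10\n", from_binary("101"));       /* prints 5   */
    printf("1110111001 in binary is %lu\n",     from_binary("1110111001")); /* prints 953 */
    return 0;
}
```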
To represent the number 1 0 1, three electrical tracks in order would have their voltages on, off, on. Each of these tracks carries a binary digit, or bit. With three electrical tracks this is called a 3-bit number.
There are similarly 4-bit numbers, 8-bit numbers and so on. Of course the 4-bit number 0 1 0 1 is equivalent to the 3-bit number 1 0 1. Let's consider adding together a pair of 1-bit numbers A and B to get C.
0 + 0 = 0 and 0 + 1 = 1. What happens when we try to add 1 + 1 in binary? The same thing as happens when we add 5 + 5 in regular base 10.
There is no digit big enough, so we put a 0 in that column and carry the 1 to the next column. So, to guarantee being able to store the result for any possible values of A and B, the result C must be a 2-bit number. The bit on the right is 1 when A or B is 1, but not both: this is the behavior of the XOR gate, if you recall Charlie from the last section.
The bit on the left is 1 only if A and B are both 1, ergo the AND gate. Therefore C is obtained by connecting A and B to gates like this. When adding numbers A and B larger than 1 bit, things get more complicated, with three-input gates needed because there is now a bit from A, a bit from B and a carried bit.
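Here is a small C sketch of those two circuits: a half adder for single bits, and a full adder that also takes a carried bit (a logical model for illustration only):

```c
#include <stdio.h>

/* Half adder: adds two 1-bit numbers using an XOR and an AND gate. */
static void half_add(int a, int b, int *sum, int *carry) {
    *sum   = a ^ b;  /* rightmost bit: 1 when exactly one input is 1 (XOR) */
    *carry = a & b;  /* leftmost bit: 1 only when both inputs are 1 (AND)  */
}

/* Full adder: adds a bit of A, a bit of B and a carried-in bit. */
static void full_add(int a, int b, int cin, int *sum, int *cout) {
    int s1, c1, c2;
    half_add(a, b, &s1, &c1);
    half_add(s1, cin, sum, &c2);
    *cout = c1 | c2;  /* a carry from either stage carries out */
}

int main(void) {
    int sum, carry;
    half_add(1, 1, &sum, &carry);
    printf("1 + 1 = %d%d in binary\n", carry, sum);  /* prints 10, i.e. 2 */
    full_add(1, 1, 1, &sum, &carry);
    printf("1 + 1 + carry = %d%d in binary\n", carry, sum);  /* prints 11, i.e. 3 */
    return 0;
}
```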
As I said in the previous section: NAND gates are universal, meaning any required gate network can be implemented using enough of them. [Beethoven's 5th symphony] In base 10, multiplying by 10 is easy: just add a 0 onto the end of your number. In base 2 the same is true for multiplying by 10 in binary or 2 in base 10.
So, multiplying by the numbers 10, 100, 1000 in binary just involves shifting each bit to the left and putting a 0 on the end, which is unimaginatively called a bit shift. What about multiplying by an arbitrary number, say for example 11 in binary? This is the same as multiplying by 10, multiplying by 1, and adding the two together.
So, binary multiplication of A by an arbitrary number B works as follows: begin with zero. If the rightmost bit of B is 1, add A. Bit shift A once and, if the next bit of B is 1, add this to the total. Continue for every bit of B.
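As a sketch in C, just to illustrate the idea (using the language's built-in shift operators for the bit shifts):

```c
#include <stdio.h>

/* Multiply a by b using only bit shifts and additions, as described above. */
static unsigned int shift_and_add(unsigned int a, unsigned int b) {
    unsigned int total = 0;           /* begin with zero */
    while (b != 0) {
        if (b & 1)                    /* if the rightmost bit of b is 1...      */
            total += a;               /* ...add a to the running total          */
        a <<= 1;                      /* bit shift a once to the left           */
        b >>= 1;                      /* move on to the next bit of b           */
    }
    return total;
}

int main(void) {
    printf("6 * 3 = %u\n", shift_and_add(6, 3));  /* prints 18 */
    return 0;
}
```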
Computers also rely on making comparisons of numbers, such as: Is A equal to B? Is A greater than B?
And so on. The computer might have the current time in seconds stored as a binary number A, updated every second, and the time an alarm is set for stored as B. The alarm would be made to go off when A is equal to B.
To do this, the computer must check that every bit of the number A is the same as the corresponding bit of B. The XOR gate returns 1 when one input is 1 and the other input is 0, in other words when its inputs are different. The XNOR gate, which is the complement of the XOR, meaning it returns the opposite of the XOR, therefore returns a 1 when its inputs are the same.
A bunch of XNOR gates applied to each pair of bits of A and B return whether those bits are equal or not. The AND gate returns a 1 when all its inputs are 1. So applying it to the XNOR gates' outputs returns a single bit which is 1 if and only if A is equal to B.
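In C, the same idea can be sketched for two 8-bit numbers: XNOR every pair of bits, then check that all the results are 1 (this snippet is just an illustration, not from the video):

```c
#include <stdio.h>
#include <stdint.h>

/* Compare two 8-bit numbers the way a gate network might:
   XNOR every pair of bits, then AND all the results together. */
static int equal8(uint8_t a, uint8_t b) {
    uint8_t xnor = ~(a ^ b);      /* each bit is 1 where the bits of a and b match            */
    return xnor == 0xFF;          /* "AND" of all eight bits: 1 only if every single bit matched */
}

int main(void) {
    printf("%d\n", equal8(42, 42));  /* 1: equal     */
    printf("%d\n", equal8(42, 43));  /* 0: not equal */
    return 0;
}
```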
There are also ways to implement subtraction, division and other comparisons all using gate networks. For example bit shifting the other way divides a number by two. These are all implemented using transistors etched on a chip.
Just a note: when I say the word number in this video I am referring to an integer or whole number. There is also a way to store fractions in a binary version of scientific notation called floating point numbers or floats. I won't go into how they work but the general principle outlined here still applies.
To summarize: a sequence of binary on off electrical signals is used by computers to represent a number in base 2 as opposed to base 10 which is used by people day to day. Networks of logic gates can be used to add, subtract and compare binary numbers, and with a few extra steps multiply and divide them too. [Music] We have seen how circuits composed of transistors can be used to implement addition and other algebra for any number with arbitrary binary digits or bits.
If we want to build a computer, we have to choose exactly how many electrical tracks and gates to have available for representing a number. This is like when you're filling in a form and it has a set number of boxes physically printed on it for putting in your age. For commercial computers this is always a multiple of 8.
8 bits are called a byte. The computer will have adder circuits and so on, of the types I described in the last section, which take 1-byte or 8-bit numbers, 2-byte or 16-bit numbers, or 4-byte or 32-bit numbers as inputs. No matter how many bytes a number comprises, one of its bits can be set aside to denote a plus or minus sign.
When a number is signed in this way, if the leading bit is 0 then the number is positive. If the leading bit is 1 then the number is negative. As anyone who has ever taken a mathematics exam knows: it's important to store your workings.
Whichever operation a computer undertakes, the inputs and outputs must be somehow stored. To do this, a type of circuit called a latch is used. Once it has latched, it will stay that way for a long time and can therefore be used to store a bit of data.
8 latches store a byte, and so on. This is an example of what's called a D-latch, composed of NAND gates. You may notice that this looks rather strange: before, we've seen gates arranged in a strictly sequential manner, where the output of one gate goes directly to another.
Now there is an output from one going to the input of another, but then the output comes right back. It is this kind of interconnection that allows this network of gates to hold and retain its output value 0 or 1. Nothing will change unless the E or enable input is 1.
When enable is 1, whatever bit is applied to D will propagate to the output. The computer will carry out an operation, addition for example, enable a bunch of latch circuits to store the output and then switch the enable back off. The result will then be held there as long as is required.
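Here is a rough C model of such a gated D-latch built from NAND gates. This is a logical simulation only, iterating the feedback until it settles, not a description of the real electronics:

```c
#include <stdio.h>

static int nand(int a, int b) { return !(a && b); }

/* A gated D-latch built from four NAND gates. Q and Qbar feed back into
   each other, so we iterate until the outputs settle, as the real circuit would. */
static void d_latch(int d, int e, int *q, int *qbar) {
    for (int i = 0; i < 4; i++) {           /* a few passes are enough to stabilise */
        int s = nand(d, e);                 /* "set" side   */
        int r = nand(s, e);                 /* "reset" side */
        *q    = nand(s, *qbar);
        *qbar = nand(r, *q);
    }
}

int main(void) {
    int q = 0, qbar = 1;
    d_latch(1, 1, &q, &qbar);       /* enable high: the latch stores D = 1   */
    d_latch(0, 0, &q, &qbar);       /* enable low: D is now ignored...       */
    printf("stored bit: %d\n", q);  /* ...so the stored 1 is retained        */
    return 0;
}
```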
8 of these latches can form a register, which will store a byte of data for as long as it's required. An adder circuit of the type we looked at in the previous section would have such registers to store the inputs before the addition takes place and a register to capture the result of the addition. It takes some small but significant amount of time for electrical signals to propagate, for transistors to finish switching and so on.
To guarantee that the operation is fully carried out before attempting to read out the answer, all computers have a clock. In basic terms, an oscillating crystal switches the voltage on and off, on and off, at a very specific rate. At the time this video was made, computer processors had typical clock speeds of 2 gigahertz, meaning that the clock switches 2 billion times a second.
The regular switching allows events to be synchronized. To fully carry out an addition operation it will take at least 3 clock cycles. On the first clock cycle numbers would be loaded into the input registers, then the addition circuit can work over the next clock cycle.
Finally on the third clock cycle the output register gets an enable signal and stores the result. Actually, most operations take much longer than one or even three clock cycles to carry out. We have seen in the previous section that multiplication takes several adding steps to accomplish.
Nonetheless, whether it takes one or a hundred cycles, the clock is still used to synchronize everything from when an operation begins until the output is ready. All the so-called arithmetic logic, that is, all the arrays of logic gates which do addition and subtraction, is physically located together with the registers on a chip called the processor, or Central Processing Unit. There are also circuits in there called the cache, which nowadays can store tens of kilobytes of data, as the intermediate stages of calculations for example.
There is a need for a larger and longer term way to store many more bytes of data than the registers and the cache. Computers have separate physical chips of random access memory or RAM which makes use of capacitors to store this data. If a capacitor is charged, it has a high voltage and therefore a value of 1.
If it's discharged it has 0 volts relative to ground and so is 0. The charge leaks or is lost over time, but is topped up by the memory circuits as long as the computer is powered. A typical laptop computer has tens of gigabytes or billions of bytes of RAM.
Every byte stored in RAM has its own address like a post box in an apartment block. The address is itself a binary number. The RAM has many transistors forming gate networks which ensure that when it receives the address in binary format from the processor, it then returns the byte stored at that address.
Until recently, computers wouldn't usually have more than 4 gigabytes of RAM. Each of those bytes requires a unique address, so memory addresses were 32-bit numbers. Such 32-bit numbers go up to just over 4 billion so that used to be fine.
When RAM got bigger there effectively weren't enough addresses, so now 64-bit numbers are used for memory addresses. This is the difference between 32 and 64-bit programs. So, let's say you're playing a video game where you have a number of coins in your wallet.
The video game will keep track of the address where the wallet total is stored. If you find eight more coins, the game will fetch the number currently in your wallet from that address, load it into a register, load a binary number corresponding to 8 in base 10 into another register, add them together and put the result back into the address with the wallet total. We will look at this in more detail in the next section.
Computers also store huge volumes of data on hard disk for decades at a time even without power. A hard disk is covered in tiny magnetic domains which can point up or down corresponding to 1 or 0. As the disk spins, a sensor head can either passively read off the orientation of the magnets as bits, or actively flip the orientation of the magnetic domains to write data onto the disk.
Each of the bytes also has its own separate address. A laptop or personal computer can store hundreds of gigabytes on a hard disk. There are other technologies like flash memory, DVDs and so on that I won't go into here.
The fundamental limit on how fast a computer can do operations is the speed of light, which can travel about 15 centimeters, or half a foot, during a typical computer clock cycle, so it would take multiple cycles for a computer larger than that to fetch any data. Realistically, transistors switch much more slowly than that, while a hard disk takes time to spin up and scan to read the data. In terms of types of computer memory: the cache is small, but quick to access; the RAM stores many more bytes of data, but takes longer to read; the hard disk takes the longest to read, but stores a lot of data almost permanently. So, there is a trade-off between how quickly the data can be retrieved and how much can be stored.
From now on in this video, I will explain things as if the computer just has a single pool of memory which it can instantly read. In reality, things are more complicated, as I've just described. [Beethoven's 5th symphony] To summarize: computers have a clock which synchronizes the operations being performed and ensures that there is enough time to finish performing one operation before starting another. Computers process binary numbers in groups of 8 bits called a byte. Registers are used to store numbers during computations; at other times numbers are stored in the cache, in random access memory and on a hard disk.
Each byte in memory has its own unique address which is itself a binary number. [Music] We have a way of implementing binary operations and storing the results, but the computer needs a way to determine which operation to use and when. This is called an instruction.
There are instructions such as: add two numbers, and copy this number. Computers use what's known as the von Neumann architecture, which means that the instructions are stored in memory as binary numbers amongst other data. For example, the instruction number 16 in base 10, or this byte in binary, might mean: add together two 8-bit numbers and store the result.
Instruction number 17 might mean the same thing but for 16-bit numbers and so on. When the computer encounters an instruction, it will use a comparison circuit which we looked at in a previous section to switch on the logic circuit for the necessary operation. If the byte is equal to 16, switch on the add circuit.
How does the computer know when a byte is an instruction or not? 16 might be the instruction to add, or it might be the number of coins in a video game. Well, everything has to do with a precise order in which the bytes appear in memory.
First comes the instruction byte. If the instruction is to add, the computer will need the two memory addresses of the things being added and the memory address where the result is to be stored. Remember that memory is like post boxes for a big apartment building with a unique address on each byte.
In the case of modern 64-bit systems addresses are 8 bytes long. Therefore, the entire add instruction has one instruction byte and 24 bytes of addresses; 25 in total. On the other hand, if the instruction is to copy a number, it just needs to be followed by the origin and destination addresses, meaning the whole thing takes 17 bytes in total.
The trick is that each type of instruction always takes up a set number of bytes and - crucially - the next instruction immediately follows on after the previous one. So, when the computer encounters the add instruction not only does it carry it out, but it knows that the next instruction is precisely 25 bytes along. To start with, the computer interprets byte number 0 - the one at the very start of memory - as an instruction and thereafter goes from instruction to instruction.
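As a very rough sketch of this fetch-and-execute cycle in C (not how any real processor is wired; the opcode numbers are hypothetical and single-byte addresses are used to keep it short):

```c
#include <stdio.h>
#include <stdint.h>

#define OP_ADD  16   /* hypothetical opcode: add two bytes and store the result */
#define OP_HALT  0   /* hypothetical opcode: stop                               */

uint8_t memory[256];  /* a tiny memory: instructions and data live side by side */

int main(void) {
    /* A two-instruction program starting at byte 0: add the bytes stored at
       addresses 100 and 101, put the result at address 102, then halt.
       (Real machines use multi-byte addresses; single bytes keep this short.) */
    uint8_t program[] = { OP_ADD, 100, 101, 102, OP_HALT };
    for (int i = 0; i < 5; i++) memory[i] = program[i];
    memory[100] = 30;
    memory[101] = 8;

    uint8_t pc = 0;                        /* where the next instruction starts */
    for (;;) {
        uint8_t opcode = memory[pc];       /* fetch the instruction byte        */
        if (opcode == OP_ADD) {            /* decode it and carry it out        */
            uint8_t a = memory[memory[pc + 1]];
            uint8_t b = memory[memory[pc + 2]];
            memory[memory[pc + 3]] = a + b;
            pc += 4;                       /* the next instruction is a known distance along */
        } else if (opcode == OP_HALT) {
            break;
        }
    }
    printf("%d\n", memory[102]);           /* prints 38 */
    return 0;
}
```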
Conversely, if there has been an error and the next instruction is not where it's supposed to be - tough luck! The computer will probably crash. How does the computer actually carry out an add instruction?
The number from the first address is loaded into one input register and the number from the second address into another input register. The add circuit is enabled and it does the addition. Then, after a sufficient number of clock cycles, the result can be written from the output register to the required address in memory.
What happens if the instruction is to add two 16-bit numbers together? The computer simply goes to the address of number A and, instead of getting just one byte, it gets that byte and the one immediately after. The same goes for number B and the result C.
[Beethoven's 5th symphony] A computer program is a collection of instructions (and addresses) to be carried out in order. The computer goes from instruction to instruction to make the program run. Let's look at a specific example: part of a game, which is just a program after all, where the player has a wallet with coins in it.
This is what the contents of memory might look like. Byte number 0 is the instruction to add. Then come two addresses: the address of the wallet total and the address of the number of coins the player has just found.
The third address is the wallet again. After the addition, the wallet total will be overwritten. No matter what, the computer will now interpret byte number 25 in memory as an instruction.
It is another add instruction. The addresses of the wallet and the amount of coins the player has made from selling items are the inputs. The wallet address is the output again.
The next instruction is 25 bytes further along, at byte 50. The program keeps going with the millions of other instructions required to make the game function. In this case, the wallet total is being used a lot, so it will probably be held long term in the processor's cache as well as in RAM.
When the player saves the game it will be written to a save file on the computer's hard disk. To summarize: the computer interprets certain bytes in memory as instructions for which operation to carry out. Computer programs are stored in memory.
They are just large sequences of instructions, memory addresses and numbers. The computer goes from instruction to instruction like clockwork, always knowing based on the type of the current instruction how far along the next one is. [Music] So far, we've looked at instructions which work in a linear manner: copy a number, add two together and so on.
The program would always produce the same result and eventually it would just get to the end of memory and stop. We need a few more instructions to make a computer what's known as Turing complete. Being Turing complete means that it can eventually carry out any possible computation: load a web page, render a video game and so on.
The only difference between two Turing complete computers is how fast they are at doing those computations. The first type of instruction a computer needs is what's called a jump, sometimes called a go-to. Following the instruction byte is a memory address.
As the name suggests, instead of going directly to the next instruction in memory, the program jumps to the specified memory address and carries out the instruction there. This means that the program can now do a loop. In our video game example, we have seen how instructions are used to add numbers of coins to a total kept in a wallet.
These instructions and all the others required to update the game world, render the graphics and so on are followed by a jump instruction back to the start. This way the game can keep running indefinitely. The other crucial type of instruction that a computer needs is a branching or conditional instruction.
If a given condition is met, for example if two numbers are equal, then jump to a particular address. We have already seen how a comparison can be implemented with logic gates. Adding such logic makes a jump conditional, allowing different sets of instructions to be executed based on calculations performed so far, or on inputs to the computer from devices such as a keyboard or mouse.
With conditional jumps the computer is Turing complete: given enough time it can do any calculation. At this point it's worth mentioning how a real program might actually be written. If you thought that things were quite confusing so far, you're not wrong.
Remembering which instruction does what, which memory addresses have what data stored in them, where jumps go and so on: all of this is hard to keep track of. Programmers typically work with what are referred to as high-level programming languages.
This means that all the nuts and bolts of instructions and memory addresses are hidden away and instead life is made easier for the programmer. For example, here is a bit of code in the C or C++ language. This code is readable by humans.
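The snippet shown on screen isn't reproduced in the transcript, but a minimal, complete C version consistent with the description that follows might look like this:

```c
#include <stdio.h>

int main(void) {
    int a = 7, b = 12, q;

    if (a > b) {
        q = a;        /* A is greater than B, so Q becomes A */
    } else {
        q = b;        /* otherwise (B >= A), Q becomes B     */
    }

    printf("q = %d\n", q);   /* q ends up holding the larger of a and b */
    return 0;
}
```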
If A is greater than B, set Q to be A; otherwise, when B is greater than or equal to A, set Q to be B. In other words, the number Q will end up with whichever number is larger out of A and B. This is easy to understand for the programmer and anyone who tries to read it.
A program called a compiler will take this code and arrange it into the correct set of instructions, sort out where the variables are stored in memory, where to jump, whether we need to use 16-bit or 32-bit numbers and so on. This is what's referred to as compiling the code. There are many aspects of programming languages which simplify the job of the programmer, but one of the most important is the idea of a function or method.
In mathematics, the square root is a function which takes a number and returns another. Certainly there are implementations of the square root function in computer code, but there are also more arbitrary functions and methods such as for fetching emails and so on. Say that a program needs to evaluate the square root of different numbers repeatedly.
The instructions which comprise the square root function need only be stored in memory once. Whenever the function needs to be used, or called, the computer will jump to the function's location in memory, carry out the instructions and jump back with the result to the place in the program where it left off.
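As an illustrative sketch (the function name and the use of Newton's method are my own choices; the video doesn't specify an implementation), here is a square root function in C, defined once and called from several places:

```c
#include <stdio.h>

/* A square-root function using Newton's method. The instructions for this
   function live in memory once; every call jumps here and then returns. */
double my_sqrt(double x) {
    double guess = x > 1.0 ? x : 1.0;
    for (int i = 0; i < 20; i++)
        guess = 0.5 * (guess + x / guess);  /* refine the estimate */
    return guess;
}

int main(void) {
    /* The same function is called from several places in the program. */
    printf("%f\n", my_sqrt(2.0));
    printf("%f\n", my_sqrt(100.0));
    printf("%f\n", my_sqrt(953.0));
    return 0;
}
```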
Functions and methods can be compiled into libraries so that programmers across time and space can share useful code that they've written. If you've ever seen a DLL file on Windows, that's what that is. Once a function or method has been written efficiently, other programmers can use it without needing to spend much time on it. [Beethoven's 5th symphony] To summarize: computers have an instruction to jump to a memory address and carry out whatever instruction is there.
This allows loops within programs. Some jumps only happen when a condition is met. Any computer that is capable of such jumps can carry out any computation, as long as it has enough memory and time to do so. [Music] Chances are, you're watching this video on some sort of display.
There are many technologies past and present which have been used for computer monitors, so I will speak in generalities. Light bulbs like the one at the start of the video are too large, but light emitting diodes, or LEDs, are smaller and more efficient. The higher the voltage across an LED, the brighter it is.
A display is capable of electronically changing the voltage across each of its LEDs, and therefore their brightness, according to data from the computer. Since computers handle numbers in bytes, in other words groups of 8 bits, it makes sense to have the brightness go up on a scale from 0 to 255 - the largest value a byte can take. The human eye has cells sensitive to red, green and blue light.
By using a set of three LEDs: red, green and blue, varying the relative brightness of each one, the human eye can be tricked into perceiving the full spectrum of light. A mix of red and green gives yellows and browns. Red and blue gives pinks.
All three colors at full brightness give white and at partial brightness give grey and so on. A little spot composed of three colors like this, no matter what technology is used, is called a pixel. A display has pixels arranged in rows and columns to form a 2-dimensional grid.
For example, this video at maximum resolution is 1920 pixels in width by 1080 pixels in height. The number of bytes it takes to fill a display in memory is 3 times the width times the height. In simplest terms, the computer must update this many bytes in memory 30 or 60 times per second and send the result to the display.
So, for example, if you full screen this video in its maximum resolution, each frame would take up this many bytes in memory. While displaying graphics and images may seem like a purely artistic undertaking, and in some sense it is, in actual fact for a computer it is nothing more than gate logic and instructions we have seen so far. Each pixel has a horizontal or x coordinate and a vertical or y coordinate which corresponds to its position in memory.
For example, let's say your display is exactly a thousand pixels wide and you start at the top-left pixel. If you move forward three thousand bytes in memory, the equivalent of a thousand pixels, you'll go off the end of the top row and onto the leftmost pixel of the second row. There are many algorithms, or long lists of instructions, to draw windows, buttons, text and computer graphics, all by using this kind of arithmetic and manipulating bytes in memory. To display a simple desktop environment, a loop fills every pixel with the background color by repeatedly copying the sequence of three bytes that uniquely defines the color.
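Here is a minimal C sketch of that arithmetic (the buffer layout and dimensions are just assumptions for illustration): three bytes per pixel, a function that finds a pixel's offset, and a loop that fills the background color.

```c
#include <stdint.h>
#include <stddef.h>

#define WIDTH  1000
#define HEIGHT  500

/* 3 bytes (red, green, blue) for every pixel, row after row. */
uint8_t framebuffer[WIDTH * HEIGHT * 3];

/* The byte offset of the pixel at column x, row y. */
static size_t pixel_offset(int x, int y) {
    return (size_t)(y * WIDTH + x) * 3;
}

/* Fill every pixel with one background colour. */
static void fill_background(uint8_t r, uint8_t g, uint8_t b) {
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            size_t o = pixel_offset(x, y);
            framebuffer[o] = r; framebuffer[o + 1] = g; framebuffer[o + 2] = b;
        }
}

int main(void) {
    fill_background(0, 60, 120);   /* a blue desktop background */
    return 0;
}
```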
The taskbar is drawn by looping over only rows of pixels with specific y coordinates. A start button is created by filling a rectangle with a given range of x and y coordinates with another color. Text is added on top by copying pre-determined patterns of pixels for each letter or through some other algorithm.
To display a window, it's necessary to know its specific width and height and the current position of its top left pixel. Adding the x value of the left pixel and the width by simple arithmetic gives the rightmost edge pixel. A title bar can then be drawn, along with a border text and so on.
If the user moves or resizes the window, all of this arithmetic has to be redone and the window redrawn. The simplest computer graphic involves drawing a line between two points, each with its own set of coordinates (x, y). The Bresenham algorithm involves a loop which draws just such a line by following the straight x-y line familiar from school mathematics.
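Here is a C sketch of that loop (the framebuffer and set_pixel helper are my own assumptions for illustration, not anything specified in the video):

```c
#include <stdlib.h>
#include <stdint.h>

#define WIDTH  1000
#define HEIGHT  500
uint8_t framebuffer[WIDTH * HEIGHT * 3];

/* Set one pixel to white, using the offset arithmetic described earlier. */
static void set_pixel(int x, int y) {
    size_t o = (size_t)(y * WIDTH + x) * 3;
    framebuffer[o] = framebuffer[o + 1] = framebuffer[o + 2] = 255;
}

/* Bresenham's line: step one pixel at a time from (x0,y0) to (x1,y1),
   using a running error term instead of fractions. */
static void draw_line(int x0, int y0, int x1, int y1) {
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        set_pixel(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }  /* step in y */
    }
}

int main(void) {
    draw_line(10, 10, 200, 80);   /* one edge of a triangle, say */
    return 0;
}
```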
Draw three lines and you have a triangle, which can be filled with textures, illuminated by simulated light sources and combined with thousands of others to make a 3D shape. No matter how complicated, it all comes down to addition, multiplication and jump instructions compiled into functions. Sound is the other major form of output from a computer.
Sounds are waves of high and low pressure moving through the air. Speakers and headphones produce sounds by moving a membrane back and forth. There is a permanent magnet attached to the membrane inside a fixed conducting coil.
An oscillating current through the coil pulls the magnet in and out making the membrane oscillate and creating a sound. The computer breaks up time into short steps and specifies the current at every step in time by sending a binary number to a sound card. Electronics turn the digital binary values into the appropriate current.
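As a small illustrative sketch (the sample rate and tone are arbitrary choices, not from the video), here is how a program might fill an array with those per-time-step binary values in C:

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define SAMPLE_RATE 44100   /* time steps per second */

int main(void) {
    const double pi = 3.14159265358979;
    /* One second of a 440 Hz tone: at each time step, work out where the
       speaker membrane should be and store it as a 16-bit binary number. */
    static int16_t samples[SAMPLE_RATE];
    for (int i = 0; i < SAMPLE_RATE; i++) {
        double t = (double)i / SAMPLE_RATE;
        samples[i] = (int16_t)(32767.0 * sin(2.0 * pi * 440.0 * t));
    }
    /* These numbers would then be handed to the sound card. */
    printf("first few samples: %d %d %d\n", samples[0], samples[1], samples[2]);
    return 0;
}
```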
A microphone works in exactly the opposite way. There are many input devices out there, but the way a computer deals with inputs is usually the same. The inputs leave so-called messages in memory, which are groups of bytes of preset length, identifying the device and what exactly the input consists of.
For example, a message queue may hold three messages of eight bytes each: when decoded, the first one states that the 'A' key has been pressed down; the second states that the mouse has moved 30 pixels to the right; and the third that the shift key has been released on the keyboard. The keys of a keyboard are arranged into a grid. When a key is pressed, it makes an electrical contact between a horizontal and a vertical line, allowing the precise key to be identified.
Between the keyboard and the computer's motherboard, the message announcing when a key is pressed down or released is relayed to computer memory, allowing the computer to act on it. In our video game example, imagine that the 'B' button is used to buy things and the 'S' button is used to sell. In Windows, these key presses have corresponding decimal numbers 66 and 83 respectively.
In terms of instructions, the game would have conditional jumps based on the value of the last keystroke. If the keystroke value is 66, the computer would jump to a set of instructions corresponding to buying; if 83, it would jump to instructions corresponding to selling. A computer mouse tracks how far left or right and up or down it has moved along a flat surface, and also which buttons have been pressed.
A program, typically the computer's operating system, keeps track of the x and y pixel coordinates of the mouse by adding or subtracting any changes in position received from the mouse. Say that a window with a button is currently active. When the user presses down the left mouse button, the program will check the x coordinate of the mouse cursor against the dimensions of the button: if x is greater than or equal to the left side of the button and less than or equal to the right side of the button, then the click is horizontally inside the button. The program then checks that the y coordinate is vertically inside the button too. If both are true, the button is considered pressed.
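A minimal C sketch of that hit test (the button structure and the coordinates are made up for illustration):

```c
#include <stdio.h>

/* A button occupies a rectangle of pixel coordinates. */
struct button {
    int left, top, right, bottom;
};

/* 1 if the mouse coordinates fall inside the button, 0 otherwise. */
static int is_inside(struct button b, int mouse_x, int mouse_y) {
    return mouse_x >= b.left && mouse_x <= b.right &&   /* horizontally inside */
           mouse_y >= b.top  && mouse_y <= b.bottom;    /* vertically inside   */
}

int main(void) {
    struct button start = { 0, 1050, 80, 1080 };   /* a start-button-sized rectangle */
    printf("%d\n", is_inside(start, 40, 1060));    /* 1: clicked  */
    printf("%d\n", is_inside(start, 500, 300));    /* 0: missed   */
    return 0;
}
```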
If at any moment you have 10 buttons on screen, the program will check every button in turn to see whether it has been pressed or not. [Beethoven's 5th symphony] To summarize: a computer display is a grid of pixels.
Each one is a group of a red, a green and a blue light. The brightness of each light is set by a corresponding byte in computer memory. A pixel can therefore take on an arbitrary color, and a group of them together makes an image.
Sound is recorded by measuring the motion of sound waves and saving them as binary numbers in memory. The reverse process is used to play back the sounds. External devices send messages to the computer, which are placed into memory allowing the computer to identify inputs, such as when a key has been pressed or the mouse has been moved, and then act upon them.
Finally, I want to take stock of the entire video. I think it's amazing how doping silicon with impurity elements and performing relatively simple logical operations allows us to watch and create videos, process information and even inhabit virtual worlds. Computing is all about building up simpler concepts to achieve more complicated ones.
If you've got CMOS pairs, you can make a NAND gate; if you've got NAND gates, you can make any logic gate; if you have logic gates, you can make instructions; if you have instructions, you can make algorithms into functions; from functions you can build up programs as complicated as you like. This also means that computers are very dumb: they have no intuition and must rigidly follow programs which have every possible eventuality spelled out in great detail.