Term
|
Definition
|
|
Term
|
Definition
• 1642: Blaise Pascal – the Pascaline mechanical calculator
  o Basic design used up through the early 20th century |
|
|
Term
|
Definition
• 1801: Joseph-Marie Jacquard: programmable weaving loom using punch cards |
|
|
Term
|
Definition
• 1822: Charles Babbage – the Difference Engine
  o 1833: Analytical Engine – never built
    Mill: arithmetic processing unit; Store: memory; input and output |
|
|
Term
|
Definition
Punch cards can easily be read (and, for that matter, created) by wheeled mechanisms. That’s why they lasted so long. |
|
|
Term
|
Definition
Personality: Herman Hollerith
o Tabulator
  Patented in 1889 as his doctoral thesis
  Used in 1890 for the census; results tabulated in only one year
  1880 census had taken eight years to tabulate!
o Founded the Tabulating Machine Company in 1896
o Went on to invent the automatic card feeder and the keyboard-based card punch
o In 1911, TMC and three other corporations merged into the Computing-Tabulating-Recording Company (CTR)
o In 1924, renamed the International Business Machines Corporation |
|
|
Term
|
Definition
Footnote 1: 1928 brought the 80-column punch card format. |
|
|
Term
|
Definition
Footnote 2: Konrad Zuse, in Germany, built the Z1, Z2, and Z3 in the 1930s using electromechanical relays. The machines were programmable, with memory, an arithmetic unit, and a control unit. The German government ignored the work, and the machines were destroyed by Allied bombs. |
|
|
Term
|
Definition
First Generation: Vacuum Tubes
Vacuum tubes invented by Edison in 1883 (sidebar: p19)
First generation of electronic computers:
  John Atanasoff, Atanasoff-Berry Computer: fully electronic, but built specifically to solve linear equations
  John Mauchly, J. Presper Eckert: ENIAC, the first all-electronic general-purpose computer
    o 17,468 vacuum tubes
    o 1,800 square feet
    o 30 tons
    o 174 kW
    o 1000-bit capacity
    o Not actually finished before the end of WWII |
|
|
Term
|
Definition
Second Generation: Transistors
Transistor invented in 1948 at Bell Labs (sidebar: p22)
IBM (7094, 1401), DEC (PDP-1), Unisys (1100) dominated this generation
Control Data Corporation: Cray's CDC 6600, the first supercomputer
  o 10 MIPS
  o 60-bit words
  o 128 kilowords (around a megabyte) |
|
|
Term
|
Definition
Third Generation: Integrated Circuits (Microchips)
IBM System/360
DEC PDP-8 and PDP-11 – time-sharing computers
Cray Research Corporation – Cray-1, 1976
  o 150 MIPS
  o 8 megabytes of memory |
|
|
Term
|
Definition
Fourth Generation: VLSI
o More than 10,000 components per chip
o 1997: the ENIAC-on-a-chip had 10% of the number of transistors of a typical late-90s microprocessor
o 1971: Intel 4004: 4-bit, 108 kHz
o Microcomputers starting in 1975: Altair 8800, Apple I, Apple II, Commodore PET, Commodore VIC-20
o 1981: The IBM PC
  Project deadline of one year!
  Open architecture |
|
|
Term
|
Definition
Moore’s Law: Gordon Moore
o 1965: “The density of transistors in an integrated circuit will double every year.”
o Revised: “The density of silicon chips doubles every 18 months.”
o Still holds true even now. |
|
|
Term
|
Definition
John Mauchly, J. Presper Eckert: ENIAC, the first all-electronic general-purpose computer
o 17,468 vacuum tubes
o 1,800 square feet
o 30 tons
o 174 kW
o 1000-bit capacity
o Not actually finished before the end of WWII |
|
|
Term
|
Definition
John Atanasoff, Atanasoff-Berry Computer: fully electronic, but built specifically to solve linear equations |
|
|
Term
|
Definition
Control Data Corporation: Cray’s CDC 6600, the first supercomputer
o 10 MIPS
o 60-bit words
o 128 kilowords (around a megabyte) |
|
|
Term
|
Definition
Cray Research Corporation – Cray-1, 1976 o 150 MIPS o 8 megabytes memory |
|
|
Term
Microcomputers starting in 1975 |
|
Definition
o Microcomputers starting in 1975: Altair 8800, Apple I, Apple II, Commodore PET, Commodore VIC-20 |
|
|
Term
Principle of Hardware/Software Equivalence |
|
Definition
Principle of Hardware/Software Equivalence • Anything that can be done in hardware can be done in software, and vice versa. o Corollary: Hardware is usually faster. o Corollary: Software is usually more convenient. |
|
|
Term
Components of a Modern Computer |
|
Definition
Components of a Modern Computer
• Note the difference between metric and “metric” prefixes. “Kilo” refers to both 10^3 and 2^10, but these are not the same number. Likewise “Mega” for 10^6 and 2^20, “Giga” for 10^9 and 2^30, etc.
• The CPU: The clock cycle speed of modern CPUs is measured in gigahertz (GHz) – billions of cycles per second.
• Memory: The capacity of memory is measured in gigabytes. The speed of the memory bus is measured in megahertz (MHz).
  o Speaking of busses, there are several: the memory bus and the I/O bus are the most important. A bus is anything that transfers information among parts of the computer.
• Ports, like USB ports, are exterior connections that hook external components up to a bus. |
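A quick Python sketch (not from the book) of the gap between the metric and “metric” prefixes:

# The metric prefixes versus their power-of-two cousins.
for name, metric, binary in [("kilo", 10**3, 2**10),
                             ("mega", 10**6, 2**20),
                             ("giga", 10**9, 2**30)]:
    print(f"{name}: {metric} vs {binary} ({100 * (binary - metric) / metric:.1f}% larger)")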
|
|
Term
|
Definition
• A processor, memory, input, and output. |
|
|
Term
|
Definition
The clock cycle speed of modern CPUs is measured in gigahertz (GHz) – billions of cycles per second. |
|
|
Term
|
Definition
• Memory: The capacity of memory is measured in gigabytes. The speed of the memory bus is measured in megahertz (MHz). o Speaking of busses, there are several: the memory bus and the I/O bus are the most important. A bus is anything that transfers information among parts of the computer. |
|
|
Term
|
Definition
• Ports, like USB ports, are exterior connections that hook external components up to a bus. |
|
|
Term
|
Definition
Standards Bodies
• There’s a reason all of this stuff (more or less) works together, and often it’s because a standards body has published a way for it to. Here are some of the important international ones…
  o IEEE: Institute of Electrical and Electronics Engineers
  o ISO: International Organization for Standardization
  o ITU-T: International Telecommunication Union – Telecommunication Standardization Sector (formerly CCITT, the Comité Consultatif International Téléphonique et Télégraphique)
• …and some of the important national ones:
  o ANSI: American National Standards Institute
  o BSI: British Standards Institution
  o CEN: Comité Européen de Normalisation (European Committee for Standardization) |
|
|
Term
|
Definition
• User: Executable Programs |
|
|
Term
|
Definition
• High Level Languages: C++, Java, etc. |
|
|
Term
|
Definition
|
|
Term
|
Definition
• System Software: OS, Libraries |
|
|
Term
|
Definition
• Machine: Instruction Set Architecture |
|
|
Term
|
Definition
• Control: Microcode (or more hardware) |
|
|
Term
|
Definition
• Digital Logic: The metal (actual circuitry) |
|
|
Term
Stored Programs: The Von Neumann Model |
|
Definition
• (Actually, Mauchly and Eckert – remember them? – came up with it first. But the Department of Defense wouldn’t let them publish it. However, another fellow knew about it, and didn’t work for the DoD…) • John von Neumann |
|
|
Term
Components of the Von Neumann Computer |
|
Definition
o The CPU
  Arithmetic/Logic Unit (ALU)
  Registers – very small memory areas on which the ALU operates
  The Program Counter (we’ll get to this later)
o Memory
o Input and Output
o A single logical path between the CPU and the memory |
|
|
Term
|
Definition
Arithmetic/Logic Unit (ALU); Registers – very small memory areas on which the ALU operates; the Program Counter (we’ll get to this later) |
|
|
Term
Positional Numbering Systems |
|
Definition
• A slightly less familiar one: let’s look at base 3 numbers. |
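A minimal Python sketch (my own; the names are made up) of how base-3 positional weights work:

# Evaluate a base-3 numeral by accumulating digit values left to right:
# each step shifts the running value one base-3 place and adds the new digit.
def from_base3(digits: str) -> int:
    value = 0
    for d in digits:
        value = value * 3 + int(d)
    return value

print(from_base3("212"))   # 2*9 + 1*3 + 2*1 = 23
print(int("212", 3))       # Python's built-in conversion agrees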
|
|
Term
Leibniz and Binary Numbers |
|
Definition
• Gottfried Leibniz was the first fellow to generalize positional systems. • He was particularly interested in binary numbers – that is, base-2 numbers. He ascribed religious significance to them. • They definitely map to the most basic concepts in mathematics: truth and falsity. • They also map to a high and low electrical signal, which made them immediately useful in computers. To this day, all serious computers exclusively do their actual calculation in binary numbers. |
|
|
Term
|
Definition
• A single binary numeral, or binary digit, is known as a bit.
• Beginning in 1964, IBM’s System/360 referred to a group of eight bits as a byte. That caught on, and still holds.
  o If you want to be more precise, eight bits can also be called an octet. The book uses octet to refer to a group of three bits; this is not a common use.
  o Four bits is also called a nibble. No one actually uses this term anymore.
• Words are the groups of bits that a computer manipulates together. They are usually sized as powers of 2. |
|
|
Term
|
Definition
Addition and subtraction work more or less like they do in decimal, but how do we get there? |
|
|
Term
|
Definition
• Hexadecimal (base 16) and octal (base 8) systems are used solely because of how easy it is to convert them to and from binary. See page 46. |
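A small Python illustration (my own) of why hex is so convenient: each hex digit is exactly four binary digits, so conversion is just grouping bits in fours from the right:

bits = "101101101111"
# Slice off four bits at a time, working from the right-hand end.
groups = [bits[max(0, i - 4):i] for i in range(len(bits), 0, -4)][::-1]
print(groups)                                      # ['1011', '0110', '1111']
print("".join(f"{int(g, 2):X}" for g in groups))   # B6F
print(hex(int(bits, 2)))                           # 0xb6f – the built-in agrees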
|
|
Term
Converting Integers to Binary |
|
Definition
• Repeated subtraction is the most intuitive method:
  1. Set all places to 0.
  2. Determine the highest power of 2 less than or equal to the number.
  3. Set its place to 1, and subtract it from the number.
  4. If you get zero, stop. Otherwise, go back to step 2.
• This works, but it’s slow and it requires that you know all the powers of 2. Division-remainder is faster, but less intuitive:
  1. Divide the number by the destination base (i.e., 2). Keep the remainder (even if it’s 0).
  2. If the quotient is 0, go to step 3. Otherwise, go back to step 1, using the quotient as the new number.
  3. Write down all the remainders, right to left. |
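A sketch of the division-remainder method in Python (the function name is mine):

# Division-remainder conversion of a non-negative integer to binary,
# following the steps above.
def to_binary(n: int) -> str:
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))   # step 1: keep the remainder, even if it's 0
        n //= 2                         # use the quotient as the new number
    return "".join(reversed(remainders))  # step 3: write the remainders right to left

print(to_binary(37))   # 100101
print(bin(37))         # 0b100101 – Python's built-in agrees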
|
|
Term
Converting Fractions to Binary |
|
Definition
• Instead of repeatedly dividing, repeatedly multiply:
  1. Multiply the number by the destination base (i.e., 2). Keep the digit to the left of the decimal point (even if it’s 0).
  2. If the fraction to the right of the decimal point is 0 or you’re at your desired precision, stop. Otherwise, go back to step 1, using the fraction to the right of the decimal point as the new number.
  3. Write down a binary radix point, and to the right of it, write all the digits you kept, left to right. |
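And the repeated-multiplication method, again as a rough Python sketch (names are mine):

# Convert a fraction 0 <= f < 1 to binary by repeated multiplication,
# stopping after `precision` bits if the fraction never reaches zero.
def frac_to_binary(f: float, precision: int = 8) -> str:
    bits = []
    for _ in range(precision):
        f *= 2                 # step 1: multiply by the destination base
        bit = int(f)           # keep the digit to the left of the radix point (0 or 1)
        bits.append(str(bit))
        f -= bit               # continue with the fraction to the right of the point
        if f == 0:             # step 2: stop if nothing is left
            break
    return "." + "".join(bits)

print(frac_to_binary(0.625))   # .101
print(frac_to_binary(0.1))     # .00011001 – 0.1 has no finite binary expansion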
|
|
Term
|
Definition
Consider scientific notation, e.g.: 3.2767 × 10^4. Calling it intuitive is a stretch, but it’s a reasonably readable way to represent numbers of arbitrarily large or small magnitude. We represent floating point numbers – numbers of arbitrarily large or small magnitude, where the decimal point can be anywhere – the same way in computers.
o Each floating point number has 3 components:
  o The sign – 1 for negative and 0 for positive. (Yes, they’re sign-magnitude. Floating-point calculations are so much more complicated that we might as well just use the more intuitive method.)
  o The exponent – the power of 2 by which the next component will be multiplied. Older sources will call this the characteristic. The number of bits for the exponent varies by representation (but is always constant within a representation).
  o The significand – a value less than one, represented by binary digits written to the right of an imaginary binary point. Older sources will call this the mantissa. The number of bits for the significand varies by representation (but is always constant within a representation).
o We represent negative exponents by biasing the exponent – i.e., just adding a value to it. We typically choose this to be the value in the middle of the integers that can be represented at the exponent’s bit width; e.g., with a 5-bit exponent, we would choose 16.
o We avoid multiple representations of the same number by normalizing the significand: we require that its first digit be 1.
o In fact, sometimes we just pretend the 1 is there and grab an extra bit of significance that way. |
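Here’s a rough Python sketch of encoding a value under these rules. The specific 14-bit layout (1 sign bit, a 5-bit exponent biased by 16, and an 8-bit significand normalized to start with 1) is an assumption for illustration, not necessarily the book’s exact format:

# Encode a value in a simple float format: 1 sign bit, 5-bit exponent biased
# by 16, 8-bit significand, normalized so the value is 0.1xxxxxxx * 2^(exp-16).
# (This toy layout is an assumption, chosen only to match the 5-bit/bias-16 example.)
def encode_simple_float(x: float) -> str:
    sign = 0 if x >= 0 else 1
    x = abs(x)
    exp = 0
    while x >= 1:          # normalize downward until the value is below 1 ...
        x /= 2
        exp += 1
    while 0 < x < 0.5:     # ... and upward until the first significand bit is 1
        x *= 2
        exp -= 1
    sig_bits = ""
    for _ in range(8):     # peel off 8 significand bits by repeated multiplication
        x *= 2
        bit = int(x)
        sig_bits += str(bit)
        x -= bit
    return f"{sign} {exp + 16:05b} {sig_bits}"

print(encode_simple_float(13.25))   # 0 10100 11010100  (13.25 = 0.110101 x 2^4)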
|
|
Term
|
Definition
You should already know a bit about precision in calculation. If you don’t (or, for that matter, if you do), take a look at pp67-68 and 71-72. |
|
|
Term
IEEE Floating Point Numbers |
|
Definition
o IEEE-754 floating point numbers are used in basically every current computer.
o There are some legacy systems that need to support other formats, but we won’t worry about them right now.
o They have a few quirks:
  o They do have an implicit leading 1 – to the left of the binary radix point.
  o They have a few special values:
    A number with all bits set to 0 represents zero.
    A number with all exponent bits set to 1 and all significand bits set to 0 represents infinity – positive or negative, depending on the sign bit.
    A number with all exponent bits set to 1 and anything other than zeroes in the significand represents a value that is not a real number.
o Single-precision IEEE numbers have 1 sign bit, 8 exponent bits, and 23 bits in the significand.
o For double-precision numbers, that’s 1, 11, and 52. There are more precise formats, but no standard ones; look up all the things a “long double” can be if you’re curious. |
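A small Python sketch (using the standard struct module) that pulls the three single-precision fields out of a float:

import struct

# Split the IEEE-754 single-precision bit pattern of a float into its fields:
# 1 sign bit, 8 exponent bits (bias 127), 23 significand bits.
def fields_of(x: float):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    significand = bits & 0x7FFFFF
    return sign, exponent, significand

print(fields_of(1.0))           # (0, 127, 0): implicit leading 1, exponent 127 - 127 = 0
print(fields_of(-0.15625))      # (1, 124, 2097152): -1.01 (binary) x 2^-3
print(fields_of(float("inf")))  # (0, 255, 0): all exponent bits set, significand 0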
|
|
Term
|
Definition
• England – first half of the 1800s • Taught himself Greek, Latin, French, German – and math • Son of a tradesman – could not be appointed to a prestigious university • Professor of Mathematics at Queen’s College in Cork, Ireland • 1854: The Laws of Thought |
|
|
Term
|
Definition
• Chose binary for the ABC • Did not realize he was drawing from Boole |
|
|
Term
|
Definition
• An algebra for the manipulation of variables with exactly two states • Historically and practically, those states are almost always truth and falsity • Maps the straightforward way to binary digits |
|
|
Term
The Fundamental Operators |
|
Definition
• There are several variants of notation for each of these; we will use the ones from the book.
• There are three operators that matter: AND, OR, and NOT. There are some others that exist for convenience – we’ll get to them later.
• NOT has precedence over AND, which has precedence over OR.
• An important note on writing NOTs: make sure to write out (xy)' rather than putting one long bar over xy. The book does the latter, which makes it very hard to distinguish from a bar over each variable separately (i.e., x'y'). The last bit on page 113 won’t make sense until you realize they’re doing that. |
|
|
Term
|
Definition
• We can simplify Boolean equations the same way we simplify more recognizable algebraic ones. Here are some useful laws.
• Make sure you get DeMorgan’s law right – look at the bottom of page 113 for a discussion about the wrong version.
• We use these identities to simplify equations more or less the same way we simplify ordinary algebraic equations.
• Examples 3.1 and 3.2 on page 114 illustrate straightforward versions of this. Example 3.3 illustrates an extremely hard version of this; you will not be asked similarly difficult questions.
• We can also use these laws as one of two ways to prove identities. The other good way is using a truth table, as in the sketch below. |
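A tiny Python sketch of the truth-table approach: enumerate every combination of inputs and check that both sides agree. DeMorgan’s law (and its wrong version) make a good test case:

from itertools import product

# Prove an identity by truth table: the two functions must agree on every input row.
def equivalent(f, g, variables=2):
    return all(f(*row) == g(*row) for row in product([False, True], repeat=variables))

print(equivalent(lambda x, y: not (x and y),
                 lambda x, y: (not x) or (not y)))    # True: DeMorgan's law holds
print(equivalent(lambda x, y: not (x and y),
                 lambda x, y: (not x) and (not y)))   # False: the "wrong version"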
|
|
Term
|
Definition
• DeMorgan’s law works just fine for more than two variables – and for more than one operator.
• To obtain the complement of any Boolean expression:
  o Exchange every individual variable for its complement (i.e., NOT that variable).
  o Swap every AND for an OR and every OR for an AND. (Don’t allow changes in the order of evaluation here – you’ll need to put parentheses around things that were ANDs and are now ORs.)
• The complement evaluates in exactly the opposite way of the original expression.
• All this is useful because sometimes it’s easier to implement the complement, and we can use the results as we would the original by just wrapping a NOT around the whole thing. |
|
|
Term
Canonical Representations |
|
Definition
• There are infinitely many ways to represent any Boolean expression.
• Two standard ways to do it are:
  o Sum-of-products, in which the expression must be AND’d variables OR’d together. This looks something like F(x, y, z) = xy + x'z + yz'.
  o Product-of-sums, in which the expression must be OR’d variables AND’d together. This looks something like F(x, y, z) = (x + y)(x + z')(y + z')(x + z).
• Since sum-of-products is almost always shorter and easier to work with, we usually use it.
• No matter what we use, we’re always going to eventually want to simplify it to the expression that has the minimal number of terms.
  o Why? See next class. |
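A rough Python sketch (function and variable names are mine) of reading a sum-of-products expression straight off a truth table: one product term (minterm) for every row where the function is true.

from itertools import product

# Build the sum-of-products form of a Boolean function from its truth table.
# A variable appears plain if its bit is 1 in that row, primed if it is 0.
def sum_of_products(f, names=("x", "y", "z")):
    terms = []
    for row in product([0, 1], repeat=len(names)):
        if f(*row):
            term = "".join(n if bit else n + "'" for n, bit in zip(names, row))
            terms.append(term)
    return " + ".join(terms)

# Example function (chosen arbitrarily): true when exactly two inputs are 1.
print(sum_of_products(lambda x, y, z: (x + y + z) == 2))   # x'yz + xy'z + xyz'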
|
|
Term
|
Definition
• Logic gates are the electronic components that implement Boolean logic. Given signal inputs, they produce outputs corresponding to certain Boolean operations.
• The three basic gates correspond to the Boolean operations we’ve already looked at.
• There are three more gates we’re concerned with, corresponding to three other operations.
• NAND and NOR gates are useful because either of them can be used to build any other gate (see the sketch below):
  o (x NAND x) = x', so we can build NOT gates with NAND gates.
  o (x NAND y)' = xy, so we can build AND gates with NOT and NAND gates.
  o (x' NAND y') = x + y, so we can build OR gates with NOT and NAND gates.
  o The demonstration for NOR is similar – try it yourself!
• We sometimes use notation showing more than two inputs into a gate.
• We also sometimes show the complement coming out of a gate along with the real value.
• Overall, using only NAND/NOR gates, we can build a digital circuit for any Boolean expression.
• Simplification is specifically important because it directly affects how many gates we need.
• Circuit designers don’t actually simplify manually with identities; they use Karnaugh maps.
  o You don’t need to understand those for this course. |
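A quick Python sketch of the NAND construction, checked exhaustively over all inputs:

from itertools import product

# Build NOT, AND and OR out of nothing but NAND, then verify them exhaustively.
def NAND(a, b):
    return not (a and b)

def NOT(a):
    return NAND(a, a)            # NAND(x, x) = x'

def AND(a, b):
    return NOT(NAND(a, b))       # negating a NAND gives AND

def OR(a, b):
    return NAND(NOT(a), NOT(b))  # DeMorgan: (x'y')' = x + y

for a, b in product([False, True], repeat=2):
    assert NOT(a) == (not a)
    assert AND(a, b) == (a and b)
    assert OR(a, b) == (a or b)
print("NOT, AND and OR all built from NAND alone")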
|
|
Term
|
Definition
• Integrated circuits contain components (transistors, capacitors…) that make up gates. • We build circuits the same way we build programs: abstraction. • Build small circuits, bigger circuits out of the small circuits, even bigger ones out of those, etc. |
|
|
Term
|
Definition
• In a combinational circuit, the output is dependent only on the input.
• Examples in the textbook:
  o P124: half-adder
  o P125: full-adder, ripple-carry adder
  o PP127-128: multiplexer
  o PP128-130: shifter
• Combinational circuits are sufficient to build the ALU of a Von Neumann machine!
• But they can’t handle the registers, let alone other memory. |
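A rough Python sketch of the half-adder, full-adder, and ripple-carry adder as Boolean functions (the names are mine; the textbook builds them from gates):

# Half-adder, full-adder, and ripple-carry adder expressed on 0/1 bits.
def half_adder(a, b):
    return a ^ b, a & b                  # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                   # (sum, carry_out)

def ripple_carry_add(x_bits, y_bits):
    """Add two little-endian bit lists by chaining full adders."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 6 + 7 = 13: bits are listed least-significant first.
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0]))   # [1, 0, 1, 1, 0] = 13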
|
|
Term
|
Definition
• In a sequential circuit, the output is dependent on the input and the previous input.
  o (Actually, it usually needs to be dependent on the current output. But it’s fairly easy to get the current output into the input.)
• That implicitly means that there needs to be a way to order input – a way to tell the difference between “current” and “previous”.
• This in turn leads to the concept of the clock, and the clock cycle time.
  o Remember that from when we talked about CPUs?
  o A modern CPU is a huge sequential circuit that updates its state billions of times per second.
• The fundamental sequential circuit is the flip-flop.
  o Specifically, computer memory is basically a huge collection of flip-flops.
  o Set/Reset (SR) flip-flops can store a single bit, and have inputs to set or clear it.
  o Jack Kilby (JK) flip-flops can store a single bit, have inputs to set or clear it, and can deal with both of those inputs being true, which an SR flip-flop can’t.
  o Data (D) flip-flops can store a single bit, and need to have that bit constantly refreshed. D flip-flops are a lot like how physical memory actually works.
  o The characteristic tables (truth tables for sequential circuits) for the three basic flip-flops are below. Q(t) is the current output, Q(t+1) is what the next output will be, and everything else is an input. |
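A small Python sketch of the same behaviour as next-state functions (my own formulation of the standard characteristic tables):

# Given the inputs and the current output Q(t), return Q(t+1) for each flip-flop type.
def sr_next(S, R, Q):
    assert not (S and R), "S = R = 1 is not allowed for an SR flip-flop"
    return S or (Q and not R)               # set wins, reset clears, otherwise hold

def jk_next(J, K, Q):
    return (J and not Q) or (not K and Q)   # J = K = 1 toggles, which SR cannot do

def d_next(D, Q):
    return D                                # the output simply follows the data input

print(jk_next(1, 1, 0), jk_next(1, 1, 1))   # True False: J = K = 1 toggles the stored bit
print(sr_next(0, 0, 1))                     # True: S = R = 0 holds the stored bit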
|
|
Term
|
Definition
• The fundamental sequential circuit is the flip-flop.
  o Specifically, computer memory is basically a huge collection of flip-flops.
  o Set/Reset (SR) flip-flops can store a single bit, and have inputs to set or clear it.
  o Jack Kilby (JK) flip-flops can store a single bit, have inputs to set or clear it, and can deal with both of those inputs being true, which an SR flip-flop can’t.
  o Data (D) flip-flops can store a single bit, and need to have that bit constantly refreshed. D flip-flops are a lot like how physical memory actually works.
  o The characteristic tables (truth tables for sequential circuits) for the three basic flip-flops are below. Q(t) is the current output, Q(t+1) is what the next output will be, and everything else is an input. |
|
|
Term
The CPU (recall von Neumann!) |
|
Definition
• Made up of two sets of components
  o The data path
    Registers (made of flip-flops; can be “scratchpads”, indices, stacks, status indicators, general purpose…)
    ALUs – usually 2 registers in, 1 register out
    Internal bus(ses)
    Clock(s)
  o The control unit
    Sequencing
    Data traffic control (fetching, decoding, register selection, interrupt handling, the program counter, power control…) |
|
|
Term
|
Definition
o The data path
  Registers (made of flip-flops; can be “scratchpads”, indices, stacks, status indicators, general purpose…)
  ALUs – usually 2 registers in, 1 register out
  Internal bus(ses)
  Clock(s) |
|
|
Term
|
Definition
o The control unit
  Sequencing
  Data traffic control (fetching, decoding, register selection, interrupt handling, the program counter, power control…) |
|
|
Term
|
Definition
• Sets of wires used as a shared path for data
  o Can be point-to-point, or common (also known as multipoint)
  o Four types of wires, or lines: data, control, address, and power
  o Use of the lines is governed by a protocol
• Types of busses
  o Old days: processor-memory, I/O, backplane
  o PCs: system, expansion, local
• Busses with more than one device that can initiate a transaction require arbitration. There are four basic common types:
  o Daisy-chain: One control line is set true down all devices, from one to the next, in priority order; the first device that needs it sets it false for everyone else farther down. Allows starvation.
  o Centralized parallel: n control lines for n devices, with a centralized controller that handles all their requests to start a transaction. That controller is a bottleneck.
  o Distributed self-selection: Devices decide among themselves who gets the bus.
  o Distributed with collision detection: Anyone can start a transaction any time the bus is free; when two transactions fail due to a collision, both devices wait a random amount of time before trying again. |
|
|
Term
Clock Cycles and Performance |
|
Definition
• The CPU requires a fixed number of ticks (clock cycles) to execute each given instruction.
• Most machines are synchronous.
• The faster the clock ticks, the faster the machine, however…
  o …we can’t tick faster than the circuit can stabilize.
  o If we try, we get undefined behavior.
• Notional equation (worked example below): CPU time = seconds/program = (instructions/program) × (cycles/instruction) × (seconds/cycle)
• Architecture is critical to cycles per instruction: the 80286 took 20 cycles to multiply; the Pentium took 1.
• Bus clocks are usually slower than CPU clocks; this means that memory access, in particular, is a bottleneck.
• Overclocking: performance improvements at a risk.
  o CPUs, memory, and even busses can often be clocked beyond their specifications, but…
  …it makes them run hotter, and …eventually, you’ll reach the point where the circuits don’t stabilize in time. |
|
|
Term
|
Definition
• Everything that’s not the computer and connects to it. • Two fundamental types of I/O: o Memory-mapped (uses memory) versus… o Instruction-based (uses specific instructions, slower but simpler) |
|
|
Term
|
Definition
• A collection of registers (!?)
• Usually byte-addressable
  o That doesn’t mean it doesn’t use words
• Even on a byte-addressable machine, you typically manipulate words
  o Memory notation example: L × W (e.g., 4M × 16 bits = 2^22 × 2^1 bytes)
    Bus memory addressing is always by word
    So addresses need a number of bits for bytes, but only a number of lines for words
    You can connect chips “lengthwise” to increase memory
  o This means that memory operations on bytes that aren’t a multiple of the memory word size cause alignment issues
• Up at the module level, you can connect modules to increase memory further
  o Low-order interleaving: flow across modules, low bits determine which module
  o High-order interleaving: flow down modules, high bits determine which module |
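A small Python sketch (the module count and size here are made up) of how the two interleaving schemes pick a module from an address:

# Which module a byte address falls in, for 4 modules of 1024 bytes each,
# under low-order and high-order interleaving.
MODULES = 4              # assumed power of two
MODULE_SIZE = 1024       # assumed bytes per module

def low_order_module(addr):
    return addr % MODULES        # low bits pick the module: consecutive addresses
                                 # spread across modules (good for sequential access)

def high_order_module(addr):
    return addr // MODULE_SIZE   # high bits pick the module: consecutive addresses
                                 # stay within one module

for addr in (0, 1, 2, 3, 4, 1023, 1024):
    print(addr, low_order_module(addr), high_order_module(addr))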
|
|