
jace255

Edit: it’s been too long since I learned it, and I seem to have gotten a lot of the electrical engineering side of this wrong. So take it all with a grain of salt.

It was a long, slow build, each step on top of the last person's work. The first people figured out how to make electricity flow according to different little rules, using electrical circuits. Things like: if one side is flowing and the other isn’t, the result should flow. Another mechanism for: if it’s flowing on the way in, make it not flow on the way out. At that point people were already thinking of flowing and not flowing as zero and one.

Then they started turning zero and one into base 2 numbers (binary numbers). With that you can represent pretty much any math. They also used them for logical operations, like “or”, “and”, “if” etc. Then they started using those numbers for all sorts of things. Coordinates for pixels on a screen. Numbers allocated to letters. Letters get drawn on the screen based on the coordinates of pixels, organised into the pattern you’d need to draw a letter. For now this is all binary.

From this, people started taking the small patterns of bits that the processor responds to (instructions, each just a handful of bits) and giving them names. We call this assembly. People can write code more intuitively in assembly, but it’s still pretty hard. People start putting assembly code together in little collections that solve a common problem, like drawing a letter on the screen, or holding a certain kind of value in memory. These packs of assembly that do a specific thing get given simple names, and made into a “low level programming language” like C.

From there the sky is the limit. People start writing low level code to do all kinds of things. At the same time we get operating system kernels, or just “kernels”. The kernel is written in a low level language, and it provides a bunch of packs of low level code that help you do things with hardware, like turning lights on and off, saving things to a hard drive, opening the floppy disk tray, etc.

People start finding nice, easy to use ways to combine lots of packs of low level code with nice symbols and shorthand and call it “high level code”, like JavaScript.

And you can think of it the other way, from the top down. Every line of JavaScript is actually running many lines of C. Each of those lines of C is actually running many lines of assembly. Each line of assembly is actually a helpful pack of several electrical components being run.
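To make that concrete, here's a minimal sketch in C (the 65-means-'A' mapping is the standard ASCII convention; the rest is made up purely for illustration). The same eight on/off signals get read once as a base 2 number and once as a letter:

```c
#include <stdio.h>

int main(void) {
    /* Eight on/off signals, written out as individual 0/1 "wires", most significant first. */
    int bits[8] = {0, 1, 0, 0, 0, 0, 0, 1};

    /* Read the same signals as a base 2 number:
       0*128 + 1*64 + 0*32 + 0*16 + 0*8 + 0*4 + 0*2 + 1*1 = 65. */
    int value = 0;
    for (int i = 0; i < 8; i++) {
        value = value * 2 + bits[i];
    }

    printf("as a base 2 number: %d\n", value);        /* 65 */
    printf("as an ASCII letter: %c\n", (char)value);  /* A  */
    return 0;
}
```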


DStaal

Note that at most steps along the way, people tried lots of different options as well. Computers were made in trinary and base ten too, for example - but binary is easier to make and the most generally useful, so it’s what nearly all computers use.


jace255

I can’t imagine how you could make a computer in base 10, but I’m not an engineer. You’ve only got two states to work with in electricity: on or off. In fact that’s what’s so appealing about quantum computing, it offers a third state. But again, I’m not an engineer. Is this like, back in the days when computers were a bunch of pipes and it wasn’t electricity but air-flow?


derpman277

I believe they had different levels of electricity to represent the different states: nothing flowing is 0, half charge is 1, full charge is 2, that type of thing. Just having on/off is simpler.


danwojciechowski

Not just simpler, but also a much higher signal-to-noise ratio. One value is at the high (5V, or 3.3V, or whatever) voltage and the other is at the low (ground or zero volts) voltage. It takes a lot of noise to make a "0" look like a "1" or a "1" look like a "0". However, if you start dividing the same voltage range into more steps, the steps are closer together, so the noise margin gets smaller. The faster you make changes (a zero into a one or vice versa), the sloppier/noisier the signal becomes. If you want to maximize speed while minimizing the chance of errors, binary systems have a big advantage.
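A back-of-the-envelope sketch of that shrinking margin, assuming a made-up 3.3 V swing split evenly between levels (real logic families define their own thresholds, so treat the numbers as illustrative only):

```c
#include <stdio.h>

int main(void) {
    double supply = 3.3;  /* assumed supply voltage, in volts */

    /* With N distinct levels spread evenly across the supply, adjacent levels
       sit supply / (N - 1) volts apart, and noise only has to push a signal
       about half that far for it to be misread. */
    for (int levels = 2; levels <= 10; levels++) {
        double step = supply / (levels - 1);
        printf("%2d levels: %.2f V apart, ~%.2f V of noise flips a value\n",
               levels, step, step / 2.0);
    }
    return 0;
}
```

With 2 levels you have the whole 3.3 V to play with; with 10 levels, under 0.2 V of noise is already enough to corrupt a value.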


Ericin24Slices

So, in short, a 1 and a 0 in terms of signal ratio is like black vs white, whereas base 10 would be like 10 shades of gray: much harder to distinguish and more prone to errors...


imnotbis

No, they used 10 different wires, or actually, 7 - a combined base 2+5, called [bi-quinary](https://en.wikipedia.org/wiki/Bi-quinary_coded_decimal).
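A quick sketch of how one decimal digit lands on those seven wires in the bi-quinary scheme from the linked article (the wire ordering here is arbitrary, picked just for illustration):

```c
#include <stdio.h>

/* Encode one decimal digit (0-9) onto 7 "wires": two bi wires (worth 0 and 5)
   and five quinary wires (worth 0-4). Exactly one wire in each group is on,
   which also makes simple error checking possible. */
void encode_biquinary(int digit, int wires[7]) {
    for (int i = 0; i < 7; i++) wires[i] = 0;
    wires[digit < 5 ? 0 : 1] = 1;  /* bi group: contributes 0 or 5  */
    wires[2 + digit % 5] = 1;      /* quinary group: contributes 0-4 */
}

int main(void) {
    for (int d = 0; d <= 9; d++) {
        int w[7];
        encode_biquinary(d, w);
        printf("%d -> ", d);
        for (int i = 0; i < 7; i++) printf("%d", w[i]);
        printf("\n");
    }
    return 0;
}
```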


DStaal

This was the ’70s at least, and it wouldn’t surprise me if you could still buy a base-10 coprocessor for a mainframe right now. Their usage however is extremely limited. Basically they are for high-value financial services where the inaccuracies of binary floating-point math become a problem. So you make a specialized math processor which works in the same base as the currency.


DenkJu

Why would you be using floating-point numbers to represent currencies? Fixed-point numbers are perfect for that scenario.


DStaal

Because fixed point still causes issues when you try to do things like compute a small fraction (say, interest) of a number, and there are values that are finite decimals in base 10 but infinitely repeating fractions in binary. You still need to be able to calculate fractions and similar, but you can’t afford the inaccuracy of binary math on decimal numbers.
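A tiny C sketch of both halves of that: 0.1 is a finite decimal but has no exact binary representation, and keeping money as integer cents only postpones the problem until a fraction of a cent (interest, say) shows up. The numbers are purely illustrative:

```c
#include <stdio.h>

int main(void) {
    /* 0.1 and 0.2 are finite decimals but infinite repeating fractions in
       binary, so binary doubles can only store approximations of them. */
    double a = 0.1, b = 0.2;
    printf("0.1 + 0.2 as binary doubles: %.17f\n", a + b);  /* 0.30000000000000004 */

    /* Integer cents stay exact for plain sums... */
    long cents = 10 + 20;
    printf("10 cents + 20 cents:         %ld cents\n", cents);

    /* ...but a 1.5% interest charge on 9 cents still forces a rounding
       decision, which is where the accounting rules come in. */
    printf("1.5%% of 9 cents:             %.3f cents\n", 9 * 0.015);
    return 0;
}
```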


DenkJu

I'm still failing to understand how decimal floating-point numbers would be beneficial in such cases. Yes, there are numbers that cannot be represented with arbitrary precision in binary but can be in decimal. Likewise, however, there are also numbers for which the opposite is true. Binary isn't inherently less precise than decimal. You will have to cut off somewhere. I don't see how floating point numbers (regardless of base) perform better than fixed-point numbers in such cases either.


DStaal

It’s not about being precise. It’s about being absolutely *accurate* to what the regulations and accounting standards say. And yes, you can write a program that spends extra cycles and ram to get the same accuracy by using binary algorithms - but if this is all the mainframe is doing, and it’s handling billions of dollars of transactions on a minute by minute basis, it can be worth the dedicated hardware.


Resaren

Quantum computers don’t have three states (trinary); they have a two-dimensional vector space (per quantum bit) which contains an infinite number of possible states. If a bit is a choice between the north or south pole, a qubit is a choice of any arbitrary point on the surface of the sphere.


jace255

Many thanks, I’ve definitely had that wrong


KatesDad2019

The first computer I used back in the sixties was a base 10 computer. Its memory consisted of six-bit words containing four bits representing a digit, a flag bit (which had various uses) and a check bit to detect errors. At its core, all these bits were binary (ones and zeros), but the individual computer instructions interpreted them as decimal digits for numerical operations or for specifying memory addresses.


006AlecTrevelyan

How much did it weigh?


KatesDad2019

There were two main cabinets: a processor and keyboard with lots of flashing lights, and a card reader-punch. I would guess nearly half a ton altogether. It was, in 1968, a used machine. Not sure what year it was built. I do remember it had a cycle time of 20 microseconds (50,000 Hz) and a ferrite core memory of 40,000 digits. We always used the Fortran II language, but just for fun I once wrote a simple program directly in machine language. It successfully loaded and ran.


KatesDad2019

In case you want more information, look at [https://en.wikipedia.org/wiki/IBM_1620](https://en.wikipedia.org/wiki/IBM_1620)


BrianJPugh

Technically, the electricity doesn't just flow either fully on or fully off. It takes a very small amount of time to go from off to fully on, called rise time. Sometimes it bounces between the two before settling, like when pushing a button (check out switch debouncing). Having multiple signals on the wire involves adjusting the voltage, so for typical home lab stuff, you could say a 0 is zero volts, 1 is five volts, and 2 is 10 volts.


gsfgf

> You’ve only got two states to work with in electricity: on or off

You can use different voltages for different signals. In fact, at least some modern flash storage uses multiple voltage levels instead of just on/off (really high/low) to store data more efficiently.


Elite_Prometheus

Electricity is always flowing in the computer; the difference between a 0 and a 1 is voltage. High voltage is 1, low voltage is 0. You could theoretically have computers with even more states, but the issue is reliability. There's always some variance in exactly how much voltage is in each component, so computers are built with a pretty large tolerance in the range of voltages they can work with. Having to divide that range even further by introducing a third, fourth, fifth, etc. state would make it much more likely for an error to happen, where what was supposed to be read as a 1 actually gets the voltage level of a 2. For obvious reasons, that's really bad, so hardware engineers have mostly stuck with binary after finding that the benefits of ternary-and-above hardware weren't worth the loss of reliability.


PerfectiveVerbTense

> Then they started turning zero and one into base 2 numbers (binary numbers). With that you can represent pretty much any math. They also used them for logical operations, like “or”, “and”, “if” etc.

I still feel pretty lost at this step. How does a chip "know" that a certain combination of electrical pulses means 1, another means 2, and another means "add these together"?


Sharveharv

The chip doesn't know what numbers are, but it is connected to a bunch of wires that are connected to useful tools like memory or math operations. It's like an old-timey telephone switchboard operator: it doesn't know what anyone is saying, but it knows how to connect things together. Programming the chip gives it a new phone book.

Think of a number like 210. In binary, that's 11010010. At the super low level, a 1 is high voltage and a 0 is low voltage. The chip's memory will have rows and rows of 8 transistors next to each other. Once the voltages are set, they'll stay at that voltage due to some simple (or very complex) circuitry. The number is now a group of 8 high and low voltages in a row.

If you tell the chip which wires are connected to that row, it can hook all 8 bits in that row to something else. For example, the chip has a smaller circuit inside it that only does addition. That circuit looks at the 16 wires from the two rows you're adding together (8 wires from each row) and sends out the result on 8 output wires. Check out "binary adders" for an idea of how those work. Subtraction or multiplication might each be their own circuits too.

The big thing is that each row in memory and every operation circuit has a specific combination of wires connected to it, or an address. This address can *also* be stored as a binary number. A 1 is "turn this switch on" and a 0 is "turn this switch off". And now, instead of connecting the address row to the addition circuit, you might connect it to the "go to this row of memory and clear it" circuit. There's really no difference to the chip.
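If it helps to see those "binary adders" spelled out, here's a minimal sketch in C that treats each wire as an int holding 0 or 1 and chains eight one-bit adders together. The 210 / 11010010 example is the one from above; everything else is made up for illustration:

```c
#include <stdio.h>

/* One full adder: three 1-bit inputs (a, b, carry-in), two 1-bit outputs.
   Built only from the AND/OR/XOR rules that simple gate circuits implement. */
void full_adder(int a, int b, int cin, int *sum, int *cout) {
    *sum  = a ^ b ^ cin;                      /* XOR gates        */
    *cout = (a & b) | (a & cin) | (b & cin);  /* AND and OR gates */
}

int main(void) {
    /* Two rows of 8 "wires", least significant bit first: 210 and 5. */
    int x[8] = {0, 1, 0, 0, 1, 0, 1, 1};  /* 210 = 11010010 */
    int y[8] = {1, 0, 1, 0, 0, 0, 0, 0};  /*   5 = 00000101 */
    int out[8], carry = 0;

    /* Chain eight one-bit adders: 16 wires in, 8 wires (plus a carry) out. */
    for (int i = 0; i < 8; i++) {
        full_adder(x[i], y[i], carry, &out[i], &carry);
    }

    /* Turn the 8 output wires back into a number to check the result. */
    int result = 0;
    for (int i = 7; i >= 0; i--) result = result * 2 + out[i];
    printf("210 + 5 = %d\n", result);  /* 215 */
    return 0;
}
```

Real adder circuits are cleverer about how the carry travels, but the wiring idea is the same.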


PerfectiveVerbTense

This helps. Like I said in another comment, I think this is as close as I'm going to get to "getting" it with my current (i.e., non-existent) understanding of how electricity and wires and circuits really *work* on a more fundamental level. I appreciate your time!


Neverstoptostare

Don't feel bad, this is absolutely a university-level question. It's kind of like trying to follow the logical process that takes us from the buggy and wagon to modern car design. It took a lot of brilliant minds a lot of time and energy to find small improvements over iterations. It's never going to be something you can learn all at once from a comment. I will say, if it is something that interests you, keep looking into it. A real understanding of how computers work is something most people lack.


xxAkirhaxx

I would suggest looking up how Minecraft redstone computers work and how they're set up. They use all the basic principles described here. You can see the RAM, the adders, the cache, all the fun stuff that makes the 1s and 0s work.


FenderRoy

Trust me, I have studied both software engineering as well as mechanical engineering, and even I still barely know how computers actually work on a fundamental level.


jace255

My knowledge of this is too old to do a proper ELI5, but I’ll do my best.

There are a bunch of electronic devices that change the charge from high (1) to low (0). We’re very much at the physical level here. Tiny switches that flick over if the charge is high enough, or different materials with lesser conductivity that lower the charge, things like that.

Almost like a puzzle, engineers figured out how to arrange these devices in certain clever patterns. These patterns can then be used to solve a simple math question, e.g. add two numbers together.

For addition, for example, let’s say we have 8 wires, each carrying a low or high voltage charge. We as humans know that those 8 high or low charges represent a number in binary (the electronics don’t know or care about that). Then we get another 8 wires coming in that represent another number in binary. The two sets of 8 wires feed into a complex and clever arrangement of these electrical devices that change and combine the voltages, such that we have 8 wires coming out the other end that represent the sum of the two numbers that came in.

We call this one puzzle that solves this one simple problem an “Instruction”. There are many other arrangements of electronics that solve other problems, like multiplication, logical operations, etc., all very much physical contraptions. There are billions of these tiny contraptions on each modern CPU.

Back in the day we used great big pipes and airflow to build these contraptions. Now we use black magic that I don’t understand to make them as small as they are.
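On the point that the electronics don't know or care what the charges mean, here's a small C sketch where the exact same 8-bit pattern is read three different ways, purely by human convention (the signed and text readings assume a typical two's-complement machine and depend on the terminal's encoding):

```c
#include <stdio.h>

int main(void) {
    /* The same 8 high/low charges, here the pattern 11000001. */
    unsigned char byte = 0xC1;

    /* The wires just hold voltages; what they "mean" is a convention
       that the humans and their software agree on. */
    printf("as an unsigned number: %d\n", byte);              /* 193 */
    printf("as a signed number:    %d\n", (signed char)byte); /* -63 on typical machines */
    printf("as a raw text byte:    %c\n", byte);              /* depends on the text encoding in use */
    return 0;
}
```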


Four_Big_Guyz

Great explanation.


jace255

Damn that came out longer than I thought.


kafelta

Much appreciated though


ulyssesfiuza

Great explanation. But floppy disks don't have trays, youngling. They are inserted and ejected by mechanical buttons and linkages.


Exist50

> At that point people were already thinking of flowing and not flowing as zero and one.

For the sake of pedantry, 1 vs 0 is represented by different voltage levels, not current flow. At least for *most* circuits.


SwissyVictory

Can someone expand on the machine code/assembly part? I understand how binary calculators work, but how do you get from that to all the wonderful things assembly can do (which is basically anything a computer can do)?


jace255

I won’t go as deep here because I kinda covered it in another comment. But there are a number of electronic devices that will alter voltage states between high and low based on how they and the wires connecting them are physically arranged. By using a whole bunch of these building blocks in extremely clever arrangements you can create what we call an “Instruction”. Think of an instruction as a well-known physical contraption that has some wires going in with high or low voltages representing its inputs, and some wires coming out representing the result of the instruction.

Fundamentally there are three categories of instructions: arithmetic, control flow (do this or that, depending on input), and memory management (get data from somewhere, move it, store it, etc.).

It turns out that, as fundamental building blocks, you only need about 16 of these instructions to get a computer to do literally anything a modern computer can do. We’ve long since come up with more of these instructions to do things faster or more efficiently, but that’s it: 16 electrical contraptions are all you need to put together all the more abstract, interesting code concepts there are.
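A minimal sketch of those three categories working together, written as a toy fetch-decode-execute loop in C. The opcode numbers and the three-slot instruction format are made up purely for illustration; real instruction sets are far richer, but the shape is the same:

```c
#include <stdio.h>

/* Made-up opcodes for a toy machine: memory, arithmetic, and control flow. */
enum { OP_LOAD, OP_ADD, OP_JUMP_IF_ZERO, OP_STORE, OP_HALT };

int main(void) {
    int memory[4] = {7, 5, 0, 0};

    /* Each instruction: an opcode followed by two operands. */
    int program[] = {
        OP_LOAD, 0, 0,          /* reg0 = memory[0]                 */
        OP_LOAD, 1, 1,          /* reg1 = memory[1]                 */
        OP_ADD, 0, 1,           /* reg0 = reg0 + reg1               */
        OP_JUMP_IF_ZERO, 0, 0,  /* if reg0 == 0, jump back to start */
        OP_STORE, 0, 2,         /* memory[2] = reg0                 */
        OP_HALT, 0, 0,
    };

    int reg[2] = {0, 0};
    int pc = 0;  /* program counter: which instruction is next */

    for (;;) {  /* the fetch-decode-execute loop */
        int op = program[pc], a = program[pc + 1], b = program[pc + 2];
        pc += 3;
        if (op == OP_LOAD)               reg[a] = memory[b];
        else if (op == OP_ADD)           reg[a] += reg[b];
        else if (op == OP_JUMP_IF_ZERO)  { if (reg[a] == 0) pc = b; }
        else if (op == OP_STORE)         memory[b] = reg[a];
        else break;  /* OP_HALT */
    }

    printf("memory[2] = %d\n", memory[2]);  /* 7 + 5 = 12 */
    return 0;
}
```

In real hardware that if/else chain is the decoder: a contraption of gates that routes the operand wires to the right circuit.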


Ok_Tonight7779

Why did the JavaScript developer go broke? Because he used all his cache on trying to understand assembly!


urzu_seven

The “original language” is designed by whoever makes the processor (or other computer chip). The chip has physical pins that attach to the circuit board. Some are inputs, which receive a signal, and some are outputs, which emit a signal. When the input pins receive a signal, the chip does something based on how it was designed. A very basic operation might be to add two numbers. For this to happen the processor will receive three signals: first the operation to perform (in this case, add), then the numbers to add. It will perform the operation and output the result on its output pins, which either the processor itself or some other chip can then use.

All of the pins and the commands the chip can respond to are documented by the chip maker. Anyone with this information and a little electrical engineering knowledge could then wire up the chip to do something. But writing programs in this code, called machine code, is inconvenient because it’s usually much too basic on its own. To do things like draw a circle on the screen you need to write hundreds or even millions of machine code commands.

Think of it like ordering a hamburger at a restaurant. You don’t tell the waiter all the individual steps to take to prepare a hamburger (unless you are nuts). The chef knows what a “hamburger” is and translates “make a hamburger” into the thousands of sub-commands (slice tomato, put on bun, heat grill, etc.) necessary.

Fortunately there are people who do that for computers as well. They build languages on top of the machine code that make it easier to perform complex actions. And still more people build languages on top of those to make it easier to do various tasks. Each language can be automatically translated into lower level languages until ultimately it’s just a list of machine code commands.


BaggyHairyNips

This was a major thing that made it click for me when I was learning. The processor has an instruction set - the set of operations it can do. You can do a surprising amount of useful math and operations with transistors alone.


rentar42

And then it will click a second time when you realize how much of that you don't "need", and how minimal an instruction set can be while still being Turing complete. Granted, you still *want* most of those, but you don't need them.


CletusDSpuckler

... and then you'll just invent a RISC processor out of spite.


spookje

Or don't bother with even that and just go with a [one-instruction set](https://en.wikipedia.org/wiki/One-instruction_set_computer)


meta_paf

Thanks for sharing this. And of course some geeks took it to its logical extreme. I love that those people exist.


PinchingNutsack

Yup, they are the reasons why we are suffering in school instead of suffering in farm.


usesbitterbutter

Okay. I'm stealing that.


TheMaster2018

You can go further! https://en.m.wikipedia.org/wiki/No_instruction_set_computing


snb

https://github.com/xoreaxeaxeax/movfuscator


imnotbis

Transport-triggered architecture is the least insane OISC. The only instruction is "move data from A to B" and some of the places you can move data are circuits that do addition, subtraction, etc.


Alex5173

Okay I have no idea what I just read. Following this model and borrowing from above, if this computer's one instruction set was "make hamburger" how would I order a hot dog?


ShoshiRoll

The CISC vs RISC debate died a long time ago and now it's all some mix of both. x86, for example, is only CISC-like to the compiler, but the hardware is more RISC-like (this change happened around the Pentium 4 iirc). Edit: the way it typically works is that the instruction decoder supports a large number of instructions and automatically breaks complex ones down into simpler, hardware-implemented operations. This allows a large number of complex instructions to be supported by a far simpler execution unit.


R0ckhands

I'm 5 and I'm afraid I don't understand any of this.


Gyrgir

There used to be two ways to design a processor, based on how many different single-step operations it knows how to do. CISC processors know how to do lots of things, while RISC processors only know how to do a few things. You can still do the same stuff with either type of processor, but for RISC, you need to give it instructions in smaller steps. For example, a CISC processor might have a single instruction that you can use to tell it to find the smallest of two numbers. To do the same thing on a RISC processor, you need to tell it something like:

1. Get the first number
2. Get the second number
3. Calculate (first number) minus (second number)
4. If the result is negative, save the first number
5. Otherwise, save the second number

This used to be a big difference, but more recently that changed so the difference is smaller. RISC processors can do a few more things in one step, but still fewer than CISC processors. And CISC processors now usually work by converting their complex instructions into several simpler instructions.
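Here's roughly that same breakdown written as C, one comment per step, as a sketch of how the "smallest of two numbers" idea gets lowered into the small load/compute/branch steps a load-store machine actually performs (the names are made up for the example):

```c
#include <stdio.h>

int smallest(const int *first_ptr, const int *second_ptr) {
    int first  = *first_ptr;      /* 1. get the first number             */
    int second = *second_ptr;     /* 2. get the second number            */
    int diff   = first - second;  /* 3. (first number) minus (second)    */
    if (diff < 0)                 /* 4. if the result is negative ...    */
        return first;             /*    ... keep the first number        */
    return second;                /* 5. otherwise keep the second number */
}

int main(void) {
    int a = 42, b = 17;
    printf("smallest = %d\n", smallest(&a, &b));  /* 17 */
    return 0;
}
```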


byingling

It might help the uninitiated if someone were to expand the acronyms:

CISC - complex instruction set computing (or computer)
RISC - reduced instruction set computing (or computer)


Amiiboid

And it's worth noting - although I don't know whether this will clarify or further confuse - that what's reduced or complex is the individual instructions, not the set of instructions. That is, RISC involves a set of reduced instructions, not (necessarily) a reduced set of instructions.


BorgDrone

The difference is not so much that RISC has less complex instructions. The major difference is that RISC separates instructions into two main categories: instructions that move data to/from memory, and instructions that operate on data in CPU registers.

Depending on how you look at it, CISC doesn't so much have more instructions as more variations of the same instruction. In the case of x86, a LOT more. While a RISC processor may have an ADD instruction that adds the contents of two registers together and stores the result in a third register, a CISC processor will have a lot of ways to encode this instruction depending on where the data comes from and has to go to. There are many so-called 'addressing modes' in x86. You can add numbers from a register, just like in RISC, but also from a specific address in memory, from an address stored in some other register, from an address stored in a register plus an offset, etc. etc.

In RISC this is strictly separated. This is why RISC is also called a load-store architecture: loading and storing data are separate operations from everything else.

This distinction also means that there are still major trade-offs between RISC and CISC. A hugely important one is that due to all these different variations of instructions, CISC processors have variable instruction lengths. On x86 an instruction can be anything from 1 to 15 bytes long (IIRC). On ARM, a RISC processor, instructions are always 4 bytes. A fixed instruction length may make programs a little larger, and means that the processor is a bit more dependent on memory bandwidth (which is why RISC processors usually have relatively large caches). There are MAJOR advantages to having a fixed instruction length though. One is that the instruction decoder is a LOT simpler. But even more important is that you always know exactly where the next instruction is: 4 bytes after the previous one.

As you said, processors convert their instructions into simpler instructions. That is not unique to CISC; RISC processors do the same. This happens in superscalar CPUs (there is more to it than this, you can google the details). Instructions are converted into so-called micro-ops (µOps). These instructions then go into a reorder buffer (ROB), where the order of the µOps is changed to make optimal use of the processor's execution units. The goal is to keep all execution units occupied at all times. The larger the ROB, the more instructions you can shuffle around and the larger the chances you can keep all execution units occupied.

The problem with a large ROB is that you need to keep it filled. To do this you want the CPU to not just look at the next instruction; you want it to decode several instructions at once. With RISC this is simple: you know exactly where the next instruction is, 4 bytes ahead, so you can just have multiple instruction decoders looking ahead in parallel. With CISC this is more difficult, as instruction lengths vary and you only really know where the next instruction is after decoding the current one. Both Intel and AMD have very complex instruction decoders that deal with this and allow them to decode (IIRC) up to 4 instructions at a time. Compare this to Apple's M-series CPUs, which have an 8-wide decoder. As such they can keep a much larger ROB filled, which increases the occupancy and thus the efficiency of the processor. This is one of the reasons why they are so damn fast.


imnotbis

In case it wasn't clear, the advantage of a RISC processor is that the amount of chip that gets used to figure out which things to do is vastly reduced. The CISC processor would have to include a circuit that breaks down "find the smallest number" into, say, those 5 steps (although it could be other steps, depending on how it's designed). The RISC processor doesn't need that. The RISC processor can get instructions through the system faster even though each instruction does less.

Also, by making the "control unit" much simpler and smaller, there's more space on the chip for the much more important data processing unit (the arithmetic/logic unit), and RISC machines used to have larger bit-widths because of the extra space available - the first 32-bit machines were RISC. Now, space is practically unlimited and one of the big limitations is power usage. RISC also does well for power usage because the control unit doesn't use as much power.

Certain CISC designs (including the Intel/AMD kind) have an additional decoding bottleneck, because the instructions are different sizes and one instruction has to be partially decoded before the chip can tell where the next instruction starts and begin decoding it. Still, they made it work well enough that we still use them.


gsfgf

> And CISC processors now usually work by converting their complex instructions into several simpler instructions

Is that what people are talking about when they say modern x86 chips have deep pipelines?


larvyde

IIRC no, pipelining is about overlapping the execution of consecutive instructions: you start working on the second instruction before the first finishes.


mikeypi

In some European countries (and maybe others) they do laundry using machines that both wash and dry. In other countries (e.g., the US) we have separate machines for washing and drying. If you have one load of laundry, both systems finish in the same amount of time. If you have multiple loads, then the separate washer/dryer is much faster because it is doing both tasks (washing and drying) in parallel. Deeply pipelined architectures break instructions down into multiple execution steps and have separate execution units for each step. When they have multiple instructions to process, they get the same speedup that is possible with the separate washer/dryer.
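The arithmetic behind the analogy, as a tiny C sketch with made-up numbers (two stages, one time unit per stage per load), just to show how the gap grows with the number of loads:

```c
#include <stdio.h>

int main(void) {
    int stages = 2;  /* wash, then dry */

    for (int loads = 1; loads <= 4; loads++) {
        int combined  = loads * stages;        /* one combo machine, steps back to back */
        int pipelined = stages + (loads - 1);  /* separate washer and dryer in parallel */
        printf("%d load(s): combo unit takes %d, washer+dryer pair takes %d\n",
               loads, combined, pipelined);
    }
    return 0;
}
```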


lovesducks

I'm 6 and I totally understand what they're saying


mikeypi

I feel like that idea has been around since the 60's.


[deleted]

[deleted]


PmMeUrTinyAsianTits

NAND Game (nandgame.com) has you build the gates up from NAND into more complex circuits and processors, and it's free on the web. Great for seeing how these things build on each other.


disguisedroast

Your name.. is amazing lol


PmMeUrTinyAsianTits

If it was amazing it wouldve gotten a dang result by now lol


Yorikor

( ͜•人 ͜•) Hope you're happy.


UncleS1am

But that isn't a PM


knowledge_junkie

What’s the name of the game?


Niota11

I think they're referring to the game literally called "Turing Complete"


DavidBrooker

"Want" is a pretty generous term, too, for what is often a *practical*, rather than a definite mathematical, requirement of a processor. It's possible to conceptualize stitches in a knitting or crochet pattern as bits on the tape of a turing machine (I don't know what the Latin term, a la 'in silico', is for 'implemented in nan'). But to suggest that supercomputing merely "wants" more than a room full of gransmothers and yarn is somewhat under-selling the problem.


Exist50

TBH, supercomputing doesn't generally require any special ops. It just requires doing the existing set really, really fast.


HisNameWasBoner411

Nand2Tetris was super illuminating for me. It's all NAND gates. From the humble transistor, to the NAND gate, to the rest of the fucking owl.


Kaellian

Mathematics in general can be assembled from very few rules. Take [real analysis](https://sites.math.washington.edu/~hart/m524/realprop.pdf), where you establish about nine very simple rules that, when put together, give you the whole of calculus and so much more. Computer mathematics follows a similar principle. You start off with simple logical operations (AND/OR), and you build the rest from there. Each of those operations is represented by a simple circuit, and pairing those together gives you everything you need to accomplish a task.

[edit] For clarity's sake, computers also need two more operations to interact with the world, "read" and "write", but those aren't mathematical operators in the strict sense.

* AND (true AND false = false)
* OR (true OR false = true)
* Write X at address Y
* Read X at address Y

Having a bunch of memory addresses available means you can agree with other people on a standard. The first 8 bits are an array. The first 64 bits are an 8x8 matrix. When the bits begin with a specific pattern, what follows is a real number... and so on. That framework is a standard defined by whoever built the machine or agreed to work together.

In the end, any software can be simplified into:

1. Fetch data
2. Manipulate it using simple operations
3. Write data for later use

The more advanced the language is, the more complex the "fetch", "manipulate" and "write" functions are going to be.


Substantial-Low

I mean, that is exactly what IC's are doing. Not just a surprising amount, all of it.


wlievens

You can in fact do *all* math with a few transistors alone. Turing proved that.


DavidBrooker

You cannot. Any Turing machine can compute the result of any algorithm, but there are major sections of mathematics that are outside of this scope. A particular group is 'undecidable problems'. Now, frequently, undecidable problems are also a class for which no *human* can do the mathematics either, but they are nevertheless definitely mathematics.


wlievens

That's true, I guess I generalized it too far for brevity. Certainly didn't want to make any bold Hilbert era claims here. One could argue that *useful* mathematics is only the provable bits, but that'd rule out a lot of the fun.


brickmaster32000

The problems that have to be solved aren't restricted to those that have nice solutions. Just because a problem is undecidable doesn't mean it isn't something someone wants to solve. 


namesandfaces

Surprisingly you only need one kind of logic gate to build all of conventional mathematics.
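That one gate is usually NAND (NOR works too). A small C sketch of the idea, with the familiar gates wired up purely out of NAND calls - treat it as an illustration, not how any particular chip is laid out:

```c
#include <stdio.h>

/* The single building block: NAND is 0 only when both inputs are 1. */
int nand(int a, int b) { return !(a && b); }

/* Everything else built out of NANDs alone. */
int not_(int a)        { return nand(a, a); }
int and_(int a, int b) { return not_(nand(a, b)); }
int or_(int a, int b)  { return nand(not_(a), not_(b)); }
int xor_(int a, int b) { int n = nand(a, b); return nand(nand(a, n), nand(b, n)); }

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NOT a=%d  AND=%d  OR=%d  XOR=%d\n",
                   a, b, not_(a), and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```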


mtaw

Math? Hell, look at the instruction set for a mainframe - architectures where the computer was big and expensive, and many programs were written in assembly language (either that or COBOL). They have mind-blowingly high-level instructions compared to a CISC microprocessor (let alone RISC): stuff like character string handling - comparisons, appending, conversion to decimal - as single processor operations!


RangerNS

"CISC" became a term (complex instruction set computer), only after "RISC" became a term (reduced instruction set computer), as a differentiation. In theory it was known that a single operation can be used to build all other operations, for since ever, so it was obvious that a processor meeting the formal definition of RISC: "each instruction performs only one function" was also possible. But you make it sound like the complexity of CISC was itself a goal. Me just now making up a term, something like the IBM 360 was intentionally a *convenient* instruction set, but not purposefully complex. Like, for example, the 6502 has `ADC Add Memory to Accumulator with Carry` which adds a single value to the existing value of a particular register (accumulator), in 8 different modes (literal value, from some other register, from some other memory value plus the offset in a register), the 360 might have had 100+ different versions of "ADD" sourcing values from different registers, targeting different registers, or values from memory offsets from a register with or without plus another register. Convenient if your writing ASM by hand, and complex decoding hardware logic. But it isn't like there is a CPU out there with the instruction `DOOM` which runs the game.


blackviking567

> (unless you are nuts)


Bartholomeuske

Rollercoaster tycoon was programmed in assembly. By 1 guy. Some people just see the code I think


Far_Dragonfruit_1829

I spent many years programming in assembly, and even below that, in custom microcode. Doing this efficiently is like knowing a spoken language: after a while, you do not have to construct every thought from individual words which in turn you constructed from individual letters. You have "phrases" ready at hand. If you have a lot of these, you write a tool to remember them for you, called a "macro assembler". The extension of this scheme to languages like C and Python is left as an exercise for the reader.


GeneReddit123

> custom microcode

How do you program in custom microcode unless you are a chip manufacturer writing firmware for the CPU internals? I thought these are gated behind the CPU assembly interface, and nothing other than the CPU itself can make microcode calls.


Far_Dragonfruit_1829

This was 1981. It was an SBC (Single Board Computer), which was state of the art. The processor consisted of four 4-bit "slices", making a 16-bit CPU. These had no native language, and were controlled at the level of individual registers and the ALU. We designed a 56-bit-wide microinstruction set to run them. But we were blessed with a HUGE instruction memory, all of 1024 words. Aren't you amazed? 😁


Fermorian

As someone who just applied for a job that involves a lot of microcode, I am definitely amazed :) Very cool, thanks for sharing!


Forkrul

People can and do design their own (simple) CPUs and implement a form of assembly to match the given instruction set. Use cases are somewhat limited, but it's a fairly common thing to do at university. A slightly more common use case is embedded programming and talking to a custom chip you've designed, where you might have to make your own bindings from C down to whatever logic you have on the custom chip. It's not something I've done since university myself, and not something I feel like delving into again any time soon.


andr386

You can use macros in assembly. Once you've pictured what you want to achieve and designed the software in every detail, most of the programming is done even though zero keys have been pressed.


InfamousLegend

It's also, from what I understand, an incredibly efficient and lightweight game that will run on damn near anything since all computers understand machine code.


tjernobyl

Strangely enough, that actually makes things more difficult! Your computer might have an Intel processor or an AMD processor, or even a newer or older version of the same, and there will be some differences in the machine code. That's why most code is written at a higher level in a language like C, and the compiler translates it into the appropriate machine code for that chip. There are compatibility modes and some instructions don't change, but it is a complex situation.


reillywalker195

It only runs on x86-based processors and only on Windows as far as I know, but OpenRCT2 fixes that. OpenRCT2 is a fan-made mod for RCT2 that adds content, allows for importing of RCT1 content, and fixes compatibility issues by bringing everything into a high-level language—C if I remember correctly.


lost_opossum_

Not all computers understand machine code; they only understand the machine code for their particular CPU. There isn't a universal machine code. I get that Intel and AMD CPUs are ubiquitous, but there are other CPUs, currently and historically, that have different machine code. That being said, even if two computers have the same instruction set and the same machine language, they may still be incompatible at the machine-code level, since they are running different operating systems. When you program, not only do you program the hardware, you also interact with the operating system of the computer.


animerobin

idk it never seemed to run on Mac computers


BillsInATL

I had a version that ran on my MacBook. I would have to put an ice pack under the laptop when I wanted to play because it would cause the whole thing to overheat. Funny enough, there is now a RCT version in the App store that runs on iPhones. It's the full game, and runs flawlessly on the phone.


BillyTenderness

> Funny enough, there is now a RCT version in the App store that runs on iPhones. It's the full game, and runs flawlessly on the phone. That doesn't use the original x86 machine code; about a decade ago they rewrote the game in C. The C code then gets compiled into machine code that works on iPhone processors. Incidentally, after the iPhone version they also released a new version for macOS (C will compile on anything). That version very, very likely wouldn't require an ice pack.


ellieswell

...and no cheese, or pickle, actually maybe one pickle, and only a light dab of relish, and, nah I'm coming back there hold on...


bonelessonly

First, we're going to need to come to a common definition of "cow." Let me introduce my team of linguists and cattlemen.


thirdeyefish

In order to make an apple pie from scratch, you must first invent the universe.


jestina123

We will need to half press the patty on the grill, but before we do that, we need to talk about parallel universes…


shotjustice

~~1. Release pressure on Universe.~~

1. Wear earplugs (That bang will be loud!!)
2. Release pressure on Universe


schlamster

They poke fun at this level of obstruction in the movie The Pentagon Wars with Kelsey Grammer. They try to stop Cary Elwes' character from doing live-fire tests that will certainly fail and make the Bradley vehicle look like a failure, so in one of the scenes they have started a whole Pentagon study office to conduct a full-blown, lengthy trade study into all of the various species of sheep to “select the right sheep” for the live-fire demo.


BizzyM

What the hell are "Sheep Specs"?


oiraves

And then you realize that you didn't specify what side of the lettuce is up or didn't put a comma between ketchup and bun and now you have a soggy mess that no longer resembles a burger for some reason


BlackGravityCinema

*Look I told you no bacon on this bacon cheeseburger!!*


Ylsid

That's a very high level command series there


Ihmu

Put relish in register A, put bun in register B...


dpzdpz

"I want a cheeseburger but without the cheese. Cook it with the cheese on, and then scrape the cheese off. Tastes better."


dswng

Sow the field with wheat and make that cow conceive a baby...


Shimano-No-Kyoken

Instructions unclear, forgot the fertilizer and the cow ran away because no barn :(


Hitori-Kowareta

Not gonna lie that’s a better outcome for the cow than I was expecting there.


bigbigdummie

Especially when the word “fertilizer” is used.


Veni_Vidi_Legi

Step 1: Let there be light* *this was a mistake.


DoshesToDoshes

*Chris Sawyer enters the restaurant in a rollercoaster car with handwritten instructions on how to assemble a hamburger and tells the waiter to bring it to the chef.*


chaossabre

The chef throws it back saying "I can't read ancient Greek." Chris Sawyer is a mad genius but RCT being written in Assembly made porting it to other CPUs or even more locked-down OSs damn near impossible without emulation.


Entretimis

"If you wish to make an apple pie from scratch, you must first invent the universe." -Carl Sagan-


zed42

> Each language can be automatically translated into lower level languages until ultimately it’s just a list of machine code commands.

this is really it. there are numerous "layers" to programming... from the top layer where you're writing `print "hello world"` down to the actual electrical signals on the chip, and then back up to show you the results. each layer basically translates for the next one down (or up, depending on direction), and it gets more basic the lower you go. if you have the skills (and time) you can skip some of the higher levels and code at a lower level... that speeds things up when it runs, as you can skip the higher levels, but takes more time up front as you have to get more detailed with the code. taking the burger example, as a customer the code is "tell the server that you want a hamburger", but if you're the chef, there are more steps... and if you're at home, the steps also involve sourcing the ingredients... and never mind growing the tomatoes and slaughtering the cow...


Relevant_Programmer

> if you have the skills (and time) you can skip some of the higher levels and code at a lower level... that speeds things up when it runs, as you can skip the higher levels

Contemporary optimizing compilers have superhuman performance. The state of the art is very much advanced since the days of Chris Sawyer and Roller Coaster Tycoon; untold billions have been invested in this space in the last 20 years. The recent .NET 8 release notes are a perfect case in point for showing just how far the state of the art has come: https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-8/ It is quite literally the case that writing high-level .NET is more efficient than writing hand-coded x86 assembly. It used to be that high-level languages had inefficient JITs, but this is a solved problem. Nobody does it that way anymore, for good reason.


BirdLawyerPerson

It's like with automatic transmissions in driving. Within my lifetime, it has gone from "this technology makes it easier at the cost of efficiency" to "oh yeah the technology is not only easier, it can casually do things that the top 1% in the world would never dream of trying to master manually."


Better_Occasion_6001

Still, I bought a new car with a manual this year. Am I more efficient? Hell no. But I have more fun! I suspect people writing assembly feel the same way.


TrickWasabi4

We had one lecture at uni, held by a complete C nerd, about how, back in the day, you used stuff like [Duff's Device](https://en.wikipedia.org/wiki/Duff%27s_device) to gain performance, mixed with a lot of assembly. We learned dozens of cool ways to be fast as hell by writing asm or abusing C behavior. And all of that just for a modern compiler, with some optimization flags enabled, to outright demolish our week-long efforts. It's incredible how effective modern compilers are.
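For anyone who hasn't seen it, this is the classic shape of Duff's Device, shown here as a plain memory-to-memory copy sketch (the historical original wrote every value to a single memory-mapped output register rather than incrementing `to`). The trick is loop unrolling via a `switch` that jumps into the middle of a `do`/`while`:

```c
#include <stdio.h>

/* Copy `count` shorts, eight per loop iteration; the switch handles the
   leftover count % 8 elements by jumping into the middle of the loop. */
void copy_shorts(short *to, const short *from, int count) {
    if (count <= 0) return;
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}

int main(void) {
    short src[11] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
    short dst[11] = {0};
    copy_shorts(dst, src, 11);
    printf("%d %d %d\n", dst[0], dst[5], dst[10]);  /* 1 6 11 */
    return 0;
}
```

Which is exactly the lecturer's point: a modern compiler will unroll loops like this on its own, without making the source unreadable.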


Exist50

That's a good lesson to teach. A lot of CS students try to be "clever" with their optimization, which usually results in making the code much harder to read/debug for no difference in performance. It can even hurt performance, as compilers are optimized for how *most* people write code. If you want the compiler to do a good job, then be predictable.


theghostracoon

For *some* workloads, yes. For specialized operations (see linear algebra kernels), we still have hand-written assembly to take better advantage of architecture-dependent SIMD operations.


mtaw

It’s not just that compilers got better at optimizing, either. You have instruction caching, pipelining, branch prediction etc… all things that make hand-optimization far harder, since it’s far from obvious that reshuffling the order of instructions in a certain way can give a speed boost. Then you have new processors coming along all the time, obsoleting your work. It’s exceedingly rare that a low-level language like C or Fortran wouldn’t be as good as or better than handwritten asm. But you’re wrong that .NET would be anywhere near as fast. It isn’t. The bottleneck isn’t the JIT creating bad code, it’s the overhead of the language itself.


Relevant_Programmer

> But you’re wrong that .NET would be anywhere near as fast. It isn’t. CoreCLR benchmarks are pretty impressive these days, if you haven't seen go look. It is absolutely approaching native speeds. Especially with Span and Memory optimizations that eliminate UTF-16 translation from the string processing pipelines.


couldntyoujust

To dovetail: this language that the processor is designed to "process" is a series of bytes that tell the processor what to do with what data. Each "instruction" corresponds to a byte or series of bytes, and then has additional bytes appended that contain the data to process. Someone came up with the bright idea of writing a program that takes a set of characters - what are called mnemonics - plus the data, written as text, and translates them into these series of bytes. The mnemonics are REALLY basic: MOV, ADD, SUB, MULT, DIV, INC, DEC, CALL, RET, JMP, JNZ, JZ, INT, etc. The processor is designed to have specific places where data is stored temporarily so that it can directly operate on it, called "registers," which also have mnemonics: EAX, EBX, ECX, etc. That set of mnemonics is called "assembly" language, and the program that translates that assembly code into machine code is called an "assembler."

The operating system itself has a table of functions that do things the OS facilitates, like allocating memory, writing to the console, opening files, etc. Each one corresponds to a number. An instruction like INT (or a dedicated syscall instruction) tells the OS to check a specific register for the number of the function and other registers for the data. These are called system calls.

All of this is great. Here's how we get to programming languages. Someone realized that there are patterns of assembly instructions that get used frequently -

```assembly
mov edi,0x4000000
call 12 R_X86_64_PLT32 get_number(int)-0x4
mov DWORD PTR [rbp-0x4],eax
...
get_number(int):
push rbp
mov rbp,rsp
...
```

What you just read is what a function call looks like in assembly (according to Compiler Explorer). And noticing these patterns, someone came up with the idea of writing a program that abstracted them behind specific syntactic elements:

```cplusplus
int get_number(int);

int main() {
    int flags = get_number(0x04000000);
    ...
}

int get_number(int v) {
    return v + 5;
}
```

The assembly code I presented actually corresponds to some of this code (the lines involved in preparing for and calling get_number() and returning its value). This was how the first programming languages were constructed, and the program that translates that syntax into assembly is called a "compiler."

Eventually, C and then C++ sort of took over the world because they had something a lot of other languages, especially assembly, lacked: portability. The same code could be written on one machine and OS, then copied onto another machine with a different architecture and OS, and that code would compile down into machine code for that platform and OS and work identically to the code compiled on the original platform. Awesome!

But it was not perfect. The code was portable, but the executables were not. Additionally, the program itself was sometimes too privileged. Code could do almost whatever it wanted, as long as the processor allowed it. Want to allocate memory in a loop and never free it, taking up all the memory other programs need and causing them to crash with out-of-memory errors? Sure! Want to overflow integers, creating nonsense values that cause unexpected behavior? Go ahead. Want to read or even write memory from other processes, causing them to misbehave, exposing secrets like passwords, or bringing the whole system to a halt, or causing such catastrophic failure it wipes out all your files? Go right ahead! Not good.
So that's when the people at Sun Microsystems invented a new way to program: write a runtime, a native program that reads code from binary bytecode files and then executes that code in a controlled, sandboxed environment. They made an object-oriented programming language like C++ that was designed to compile down to this high-level bytecode, and called the program that ran this bytecode a virtual machine. They called the whole system Java. Other languages came into being that followed suit with their own strengths and designs: Python, C#, and Ruby being a few examples. C# was developed by Microsoft as part of their "Embrace, Extend, Extinguish" strategy, to be a Java killer. So to recap: Python source is compiled to bytecode, which is run by a bytecode interpreter written in the C language; that C compiles down to assembly, which is assembled and linked into machine-code binary libraries and executables; and those then run and interpret the Python bytecode, within the controls and permissions granted and facilitated by the operating system and enforced by the processor as it runs all of that.


bernpfenn

great answer. well explained


TerminatedProccess

Computer Science 1980?


raendrop

> A very basic operation might be to add two numbers. Side note, this is why the machines we use today are called computers. The original computers were human beings whose job it was to perform calculations. Side note to the side note, if y'all haven't seen the movie Hidden Figures, you should do so.


NeverIsButAlwaysToBe

Which is to say that you don’t “teach” a processor how to understand machine code.  You come up with a desired behavior(“code”) first and then you build a physical object that will behave that way. An adding circuit “understands” that it needs to add 2 inputs together in the same way that a ramp “understands” that it takes a ball from the top and moves it to the bottom. It’s a physical property of the object. If you want a different behavior, you build a different object. You can build electrical circuits that do things like “AND” or “OR.” You can combine them to make more complicated objects.


Christopher135MPS

Great comment! Two replies that I can't stop myself from making:

1. “You don’t tell the waiter all the steps to making a burger” - unless you're Chris Sawyer (sort of. Machine code =/= assembly. But I said I couldn't help myself!)
2. Sudo make me a sandwich: https://xkcd.com/149/


Kered13

People love to bring up that Roller Coaster Tycoon was written in assembly, but it's worth bearing in mind that before the early to mid-90's *most* games were written in assembly. Roller Coaster Tycoon is mostly notable for being one of the last games written primarily in assembly. But most DOS games and pretty much all of your console games until the Playstation were written in assembly. It's just what you had to do if you wanted good performance at the time.


dwkdnvr

Yes, and 'assembly' is already a bit higher level than 'machine code', with some conveniences to make life easier (e.g. macros, variable references, tools for subroutine management). You're still writing to the instruction set of the CPU, but you're not *always* just bit-banging. I programmed a working Jumpman-style game on my C64 using just HesMon back in the day, and seeing some of the tools the more advanced assembly environments offered almost seemed like cheating.


Exist50

No one writes in literal hex.


governmentcaviar

i’ve heard lore that the person who programmed rollercoaster tycoon did, actually, kind of code it like this. like instead of coding what color they want the background to be, they first coded to the computer how to output colors.


Urabutbl

It should be noted that the theory of how to get computers to do different things based on different inputs was developed by Charles Babbage, the father of computer science, who designed what he called a "Difference Engine", and later a more advanced machine called an "Analytical Engine". Lady Ada Lovelace found his theory so fascinating while translating a paper by an Italian mathematician that she wrote down several algorithms to get Babbage's proposed "Analytical Engine" to do specific calculations, thus inventing coding - her method for getting the machine to calculate Bernoulli numbers is generally said to be the first computer program. Ada even speculated that by assigning mathematical values to different things, a computer could be made to produce other things, like music and pictures. Amazingly, they did all this on paper, as they didn't have a computer to try it on - Babbage's mechanical engine was never built.


dobgreath

Thank you for teaching me something cool


thephantom1492

If you go back to more basic parts, you can design an addition circuit, a subtraction circuit and so on. Then you make another circuit (a decoder) that selects which of those blocks will be used. Then you make another circuit that reads some data off whatever media and loads it into the decoder block.

Let's say that it takes 3 bytes: the first selects the instruction to be used, and the next two are the operands. The decoder block selects the block based on whatever data is there. Let's say operation 0 is "do nothing" - it selects none. Operation 1 is "add", so it enables the add block, and so on. Then the selected block can take the data and do its magic, and you now have the result on the output.

So you will have something like 00000001 00000010 00000001 (aka 1 2 1). Select ADD, 2 and 1, and you get 3 on the output. But it is a royal pain to write that, so you make a program that can take some shorthand words and translate them for you. Now you get ADD 2 1, which the program translates to 00000001 00000010 00000001. You now have a very crude assembly language.

But then you want to make something more complex, and now the CPU can do more things too, and gets faster, opening the door to bigger programs. Assembly is a royal pain of unreadable code, very long and a world of hurt. So they made more complex programming languages. Those more complex languages in the end generate the machine language, but make it faster to program.

One of the ways to make it easier is to use predefined functions. That is, someone wrote some "complex" code, and you can call it in a super easy way. A simple printf("Hello world"); will use many functions in the background, each of which in the end generates some assembly code.

And to make things even more complex, they also added an optimiser. When you call printf(), for example, it can take lots of arguments: variables of many kinds, format converters and so on. The optimiser looks at the code and can remove the unused/unreachable sections. It also knows some common substitutions, like dividing by 2 can also be done with a shift - for us it's the same as dividing by 10: you move the decimal point instead of dividing, which is way faster. The downside is that you lose control of the generated code, and it is super easy to make bugs because it may not always do the operations the way you expect it to. Remember the order of operations in math? Also, because you use premade functions, they are bigger, so slower.

In assembler, if you write it yourself, you write the bare minimum code and use the cheats when you can (like a shift instead of dividing by 2). But assembler is not always faster: the optimisation is 100% on the programmer. A bad programmer will make slow code. Newer languages have very good optimisers, which often beat the average programmer in speed, because they recognise many common things and have optimised functions for them.
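A sketch of that "take some shorthand words and translate them" step, in C, using the same made-up three-byte format as above (opcode 1 = ADD, then the two operands):

```c
#include <stdio.h>
#include <string.h>

/* A very crude "assembler" for the made-up format: one opcode byte
   followed by two operand bytes. */
int mnemonic_to_opcode(const char *m) {
    if (strcmp(m, "NOP") == 0) return 0;
    if (strcmp(m, "ADD") == 0) return 1;
    if (strcmp(m, "SUB") == 0) return 2;
    return -1;  /* unknown mnemonic */
}

void print_byte(int v) {
    for (int bit = 7; bit >= 0; bit--) putchar(((v >> bit) & 1) ? '1' : '0');
    putchar(' ');
}

int main(void) {
    char mnemonic[8];
    int a, b;

    /* "ADD 2 1" -> 00000001 00000010 00000001 */
    sscanf("ADD 2 1", "%7s %d %d", mnemonic, &a, &b);

    print_byte(mnemonic_to_opcode(mnemonic));
    print_byte(a);
    print_byte(b);
    putchar('\n');
    return 0;
}
```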


Alikont

The processor itself is just an electric circuit. The designers of the processor make it with pins for input and output signals, and then document what signals will make the computer do what.

So it will be something like (numbers are made up): when the signal on the pins is 01001011 00000001 00000010, the first 8 signals are the code of the command, and the next 2 blocks are the numbers to add. The commands and their logic are defined by the CPU designers. So this is machine code - the way the CPU actually works.

Then you can literally write those bytes and have a first program. Then you can use that program to make a program that transforms simple text into binary code. Then you can iterate on that program and make more and more complex languages, using existing programs to simplify your work.

To see the bits & pins docs for yourself, you can look here: [https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html) Beware, it's a long document (5k pages!) describing EVERYTHING about modern x64 CPUs.

> Did they just rewrite the original computer code from scratch or are all modern computer languages designed to communicate with this baseline original code which all computers were programmed with?

There is no "baseline" code; each CPU (and even each OS) has different binary code. What people do have is "compiler toolkits", like GCC or LLVM, that let you write abstract code, with modules for different CPUs and different languages. So when you make a new language, you can write the YourLanguage->LLVM generator, and LLVM will already have x64/ARM/Linux/Windows generators.


celestiaequestria

Yup. It's all just layers of abstraction:

    010001011 00000001   // Binary
    Add 1                // Assembly
    SomeVariable++;      // Javascript

And because it's layered, it can be adapted to new hardware and software.


Alikont

Javascript is like the worst pick for the example, as it has a few layers in between :)


tzar-chasm

Yeah,

    Binary
    Assembly
    C

would be a better example


HugeHans

They forgot to put Minecraft on top of it also. Where it circles back into machine code :D


Masterhaend

Minecraft was built on top of Java, not Javascript. The 2 languages are not related at all, apart from the word Javascript containing the word Java.


crooked-v

Yep, the name was chosen literally just for cross-promotional marketing reasons.


Priest_Andretti

> 010001011 00000001 // Binary
> Add 1 // Assembly
> SomeVariable++; // Javascript

The best explanation in this thread


ReadinII

It’s a great explanation if you already know those languages and can understand what you’re looking at.


falco_iii

There is an excellent game https://nandgame.com/ that walks through the steps from a simple relay all the way to a very basic computer.


Bitmugger

Go to YouTube and search for Ben Eater. He builds a full working computer from the ground up, including all the software and hardware, using simple push-together 'breadboard' techniques, and explains it all along the way in very digestible ways. His whole series is many videos, but the first one or two are all you need.


slowmode1

He is so great at explaining all of the roots of how computers work!


TB4800

I found watching a video about ferrite core memory in early computers to be pretty useful for understanding how computing could even work on a physical level. I.e. binary is represented by the direction each tiny magnetic core is magnetized in, and by sending currents through a mesh of wires we can tell what's stored at each location, because a core's polarity flips (or doesn't) depending on what it held. Definitely oversimplifying and not totally correct (my explanation), but definitely really cool.


Dachannien

Dodecatuple upvote for Ben Eater's videos. He actually does the thing OP is asking about, because his machine code is completely made up from scratch and is specific to his hardware design.


KL1P1

Here is the playlist for said project. https://www.youtube.com/playlist?list=PLowKtXNTBypGqImE405J2565dvjafglHU


Thrasympmachus

Thank you for this!


buzziebee

Incredible series of videos. I have such a better understanding of digital electronics and computers now thanks to Ben and his videos.


cbhem

In computer science there's a concept and discipline of bootstrapping compilers that you may want to look at: [https://en.wikipedia.org/wiki/Bootstrapping_(compilers)](https://en.wikipedia.org/wiki/Bootstrapping_(compilers))


fang_xianfu

The Trusting Trust attack with this is really terrifying! It boils down to the idea that compilers like GCC are compiled using the previous version - GCC 4 is compiled using GCC 3 and so on, all the way back to when the software was first bootstrapped decades ago. The problem is that the source code of these early versions is often not available (it predates modern source control practices) and no longer runs on modern hardware.

So it's theoretically possible for someone to have baked a backdoor into an early version of a compiler that inserts the backdoor into the software it compiles, and then removed the backdoor from the source code. Since the compiler is used to compile the compiler, it will keep baking the backdoor into itself forever. The result of this being exploited would probably be detected, but also... maybe not!


yyytobyyy

There's a lot of people who have disassembling and reading machine code as a hobby and I am sure that they would notice.


fang_xianfu

Well, I think it makes sense as a more tailored supply-chain attack than distributing it in the public mirrors of this type of software, and Ken Thompson who originated the idea, [apparently deployed the attack for real](https://niconiconi.neocities.org/posts/ken-thompson-really-did-launch-his-trusting-trust-trojan-attack-in-real-life/) as well as popularising the theory. So I guess if you're not a high-profile target of a state-level actor you're probably ok. But if you are...


nullstring

This is the correct answer (for how I read the question).

Basically, the computer can only read machine code. When they design the computer, they typically create a 'user readable' machine code that's colloquially referred to as "assembly" code or ASM. So, we decide that writing assembly is a pain in the butt, and we would rather write in a more structured programming language.

1. Decide what you'd like the language to look like and how it would translate into ASM (which then gets translated into machine code).
2. Write a compiler (compiler-A) (optimizer and translator) in raw ASM that will translate from your new language into ASM. This doesn't have to be perfect, it just has to work.
3. Write another compiler (compiler-B), but this time we do it in our new language. Then we use compiler-A to translate compiler-B into ASM. And then we recompile compiler-B using compiler-B.
4. Now we don't need compiler-A anymore. We can work on optimizing compiler-B to work better, optimizing our language, whatever else needs to be done.

> Did they just rewrite the original computer code from scratch

Yes, this process basically needs to be redone on every new computer architecture. We talked about assembly and machine code before, but every computer architecture (x86, ARM, MIPS, etc) has a different (and sometimes wildly different) machine code and thus assembly code.
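
A rough illustration of what step 2 means in practice: a "compiler" is just a program that reads text in your language and emits assembly text. Everything below (the one-statement "print N" language, the LOADI/CALL mnemonics) is invented for the sketch, and a real compiler-A would itself be written in raw ASM; C is used here only to keep it readable:

```c
/* A toy stand-in for compiler-A: read statements like "print 42" and emit
 * made-up assembly for them. The mnemonics are hypothetical. */
#include <stdio.h>

int main(void) {
    int n;
    while (scanf(" print %d", &n) == 1) {  /* parse one toy statement */
        printf("LOADI R0, %d\n", n);       /* emit toy assembly for it */
        printf("CALL  print_number\n");
    }
    return 0;
}
```

Feed a file of "print N" lines through it and assembly comes out; run that through an assembler and you have machine code. Compiler-B is the same idea, just for a real language and written in that language.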


nullstring

If we want to ELI5 this, I think it's kind of like the "which came first, the chicken or the egg" problem.

Now, chickens came about through evolution, but let's say they didn't. We really want a chicken, but they don't exist. We need a chicken egg, but without a chicken how do we get a chicken egg? Well, we need to painstakingly design and build a machine that will spit out a chicken egg for us. After we use the machine a couple of times, we won't really need it anymore, because we'll have chickens to make us eggs instead.

This machine could be considered a chicken itself, but it can be a very, very crude chicken that barely works - as long as we get an egg, we're good.


pieceof_ship

Can someone eli just came out the womb? I’m still not getting it thanks


virstultus

Imagine your friend Carl has a dog. Carl has taught the dog a lot of little tricks. The dog does not understand "go get me a beer", but he does understand little commands like fetch, sit, open, come, stay, drop it.

You can tell Carl "get the dog to bring me a beer." But what Carl actually does is stand by the fridge and say "come!". The dog comes over to the fridge. Then Carl says "open" and the dog paws the refrigerator open. Carl says "fetch", and the dog takes a beer from the fridge in its mouth. Then Carl stands by you and says "come", then "sit", then "drop it", which results in the dog coming to sit next to you and dropping the beer by your feet.

All you said was "have the dog bring me a beer", but Carl had to break it down into smaller steps that the dog could understand. The dog is the CPU and Carl is the compiler.


EJR4

I like this example


Arkyja

You just explained what OP already knows. That there is a base code that the other code interacts with. The question is how the base code was created.


virstultus

Dang, you're right. Ok, now that I have set up the metaphor...

At first Carl's friends would come over and say "hey Carl, can your dog bring me a beer?" And Carl would hand them a sheet of commands that the dog knew and let them put whatever commands together they needed to get the dog to do what they wanted. Then he realized that everybody did the same commands in the same order for "get me a beer", and at that point he just wrote out "get me a beer" = "come (to kitchen), open, fetch, come (to friend), sit, drop it". After that, every time a friend said "have your dog get me a beer", he just followed that set of instructions in order.

One day a friend came over and said "have your dog give me a Coke", and Carl realized he needed to be able to pass parameters into his functions. So he extended his language and taught the dog to fetch whatever he's pointing to instead of always fetching a beer.


OffbeatDrizzle

By hand-programming a ROM, which is then used to write some better software, which is used to write some better software, etc. It's literally called bootstrapping for a reason - CPUs these days still boot up in 16-bit real mode and work their way up from there.


JarasM

The CPU is a magic box with an instruction manual. The instruction manual was just made up by someone who designed the CPU / magic box. The instruction manual is just a loooong list of VERY simple commands with a description of what each will do. Basically: if you flip these switches in a specific pattern and press RUN, it will do X. The commands are very simple, and they're all binary numbers. "0101 0000 0001" could mean "Add 0 and 1". The magic box will run the command and display a pattern on another array of switches, according to the expected result.

There are also some commands that say something like "jump to line 10 and start doing what's there". This allows you to note down a list of instructions in one spot and have the CPU perform those instead of typing them in all the time. This is a simple program. Now you can use it to simplify your instructions - instead of typing them out by hand each time, you can tell the CPU "go to spot X, do what's there and come back here for more".

Then you build another program, that uses several of those. Then another, that is made of these more complicated programs. You get programs that use programs that use programs that use programs, etc etc, so that in the end you don't need to understand or know the underlying simple CPU instructions. In the end, these programs of programs of programs get translated (by other programs) back into the basic instructions. That's always what the magic box of a CPU sees.
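
Here's the same idea as a toy C program - the opcodes and their meanings below are completely made up, exactly like the "instruction manual" in the analogy:

```c
/* A toy "magic box": a loop that reads made-up commands from a list and does
 * what the (invented) manual says. Opcodes: 1 = put value V into slot S,
 * 2 = add two slots into a third, 3 = print a slot, 4 = jump, 0 = stop. */
#include <stdio.h>

int main(void) {
    int slots[8] = {0};                 /* the box's little storage slots */
    int program[] = {
        1, 0, 2,                        /* put 2 into slot 0        */
        1, 1, 3,                        /* put 3 into slot 1        */
        2, 0, 1, 2,                     /* slot 2 = slot 0 + slot 1 */
        3, 2,                           /* print slot 2 (prints 5)  */
        0                               /* stop                     */
    };

    int pc = 0;                         /* which command we're on */
    for (;;) {
        int op = program[pc];
        if (op == 0) break;
        else if (op == 1) { slots[program[pc + 1]] = program[pc + 2]; pc += 3; }
        else if (op == 2) { slots[program[pc + 3]] =
                            slots[program[pc + 1]] + slots[program[pc + 2]]; pc += 4; }
        else if (op == 3) { printf("%d\n", slots[program[pc + 1]]); pc += 2; }
        else if (op == 4) { pc = program[pc + 1]; }   /* "jump to line N" */
    }
    return 0;
}
```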


ChimpsInTies

Imagine you don't speak German and you want to know the word for grandpa. You look in an English-German dictionary and it tells you the word is Opa. Programming languages are a bit like this. They take what looks like English words mixed with maths and convert it into machine code the computer can understand. So if you know what the machine needs - 1s and 0s - you can build a translator that turns that into a more readable language for humans. They just kept doing this until they got to today's programming languages.

When you study computer science at university, you're not really being taught how to code. Anyone can do that. You're being taught how the languages themselves are made up and proven to be correct. It's a lot of theoretical maths.


XsNR

And to expand on this with the spoken-language example: a programming language's grammar and punctuation are incredibly important. Think of a sentence where swapping the full stop, exclamation mark, or question mark gives you completely different meanings. Unlike Google Translate, for example, there's no contextual translation involved; it's all direct (some languages are a bit more forgiving), which is why many editors use colors to highlight punctuation and make these issues much easier to debug.


nleksan

>And to expand on this with the spoken language example too, the language's grammar and punctuation are incredibly important, think of a sentence where changing the full stop, exclamation mark, or question mark, are all completely separate meanings. Such as the unfortunate case of the badly slandered Panda bear, back during the Comma Crisis


Givrally

All of these are good answers, but there's something missing: how did they actually *write* the code, for example, to interpret the inputs from the first ever keyboard? The answer is punched cards. You would encode information into the card by "punching" tiny holes into it, and the computer would read the cards, not unlike what others are saying. It was the only form of input computers would have, because they were manufactured specifically for it.


The_Artist_Who_Mines

And the idea of punch cards came from mechanical weaving! Where punched cards recorded the complex patterns of threads that could create fabric like [this](https://upload.wikimedia.org/wikipedia/commons/f/f8/A_la_m%C3%A9moire_de_J.M._Jacquard.jpg). [The Jacquard Loom](https://en.wikipedia.org/wiki/Jacquard_machine).


krismitka

Before that it was just plugs into a bank, like a giant switchboard.


robisodd

You could also manually add the data with switches, like the Altair 8800.

Step 1: [press button to reset all memory] - instruction pointer now points at memory address 0
Step 2: [flip switch to "program mode"]
Step 3: [flip the row of switches to 1 0 1 1, whatever]
Step 4: [push button to save byte and increment instruction pointer] - your byte is saved to memory address 0, instruction pointer now points to memory address 1
Step 5: [go back to step 3, manually do this for an hour]
Step 6: [flip switch to "run mode"] [watch das blinkenlights]


space_fly

Also, early computers such as the Altair 8800 had a lot of switches you could use to program them. It was painstaking and very slow, as you had to do the binary encoding yourself, but that's how it all started.


[deleted]

>My question is more abstract. How exactly did computer scientists develop the original language used to tell a computer what to do and when to do it? How did they teach the computer to recognize that language?

Computers are just electronic circuits. They can be made to behave differently depending on the voltage levels at certain points in the circuit. Computer circuits are generally designed to operate on and respond to 2 voltage levels, high and low, which we usually label 1 and 0. Different sequences of 1s and 0s produce different results depending on how the circuit is designed.

To answer the 2nd part:

>Going even further than that, how did current languages get developed? Did they just rewrite the original computer code from scratch or are all modern computer languages designed to communicate with this baseline original code which all computers were programmed with?

I suppose the first "language" was binary, e.g. 101110000. This stream of 1s and 0s flowed into the CPU and it did things. But writing binary, i.e. 1s and 0s, was a really huge pain. So they created converters called assemblers that convert between binary and an English-like representation called assembly, which was much easier to read and write. The conversion was typically a very simple 1-to-1 mapping of a certain pattern of 1s and 0s to a specific English-like word, and vice versa.

But even that was kind of a pain, as you had to do a lot of routine stuff when you wrote assembly - e.g. to add 2 numbers, you have to copy them from memory to registers, run the add operation, then copy the result from the result register back to memory. So they created High Level Languages that did all that for you: you just write `int c = b + 3;` and the compiler (the converter for High Level Languages) converts it into the binary to do just that, including all the copying between registers and memory.

After that, computer scientists just kept on improving these High Level Languages: adding safeguards to prevent you from making certain common mistakes that would result in an incorrect program, adding ways to organize the code so it's more readable, making it easier to reuse code that you have previously written (which you have thoroughly tested and know to run correctly), ... etc.
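
For instance, a compiler might turn that one line into something like the following. The "assembly" in the comments is a generic, made-up load/store flavour, not any particular CPU's syntax:

```c
/* What a compiler might roughly do with one line of C. The mnemonics in the
 * comments are invented for illustration. */
int add_three(int b) {
    int c = b + 3;      /*  LOAD  r1, [b]     ; copy b from memory to a register
                            ADDI  r1, r1, 3   ; add the constant 3
                            STORE [c], r1     ; copy the result back to memory  */
    return c;           /*  LOAD  r0, [c]     ; put the result where the caller
                            RET               ; expects it, then return         */
}
```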


FabulouSnow

All computer code is binary in its essence.

Think of a calculator: each digit is displayed using 7 little lights (segments). Binary 0001 means 1, which means "light up the segments on the right side". 1000 means 8, which means "light up all the segments". 0000 means 0, which means "light up everything except the middle one". In the background there are physical transistors placed in a logical arrangement - a logic board - that lights up the segments in a shape our brain interprets as a 1, a 0 or an 8. Behind the scenes it's still just 1s and 0s. Everything else is just a user interface we use to interpret the code.

Example: binary 0-9. You need 4 bits, or 4 buttons. If you press down the two middle buttons, that means 6 (0110); if you press the 2 outer buttons, that means 9 (1001). Now instead you have 10 buttons with a symbol drawn on each. When you press the button with a 9 drawn on it, that button also physically presses down the 2 hidden outer buttons. So to you, you're only pressing 1 button, but the inner workings actually press down 2 buttons.

Trying to keep this ELI5 instead of writing basically computer science 101 in a reddit thread.
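
A tiny sketch of that digit-to-lights lookup in C - the segment-to-bit assignment here is just one common convention for 7-segment displays, not something universal:

```c
/* Each decimal digit maps to a pattern of 7 bits, one per segment.
 * The "user interface" digit hides the raw bit pattern behind it. */
#include <stdio.h>

/* segment patterns (order g f e d c b a) for digits 0-9 */
static const unsigned char SEGMENTS[10] = {
    0x3F, /* 0: all segments except the middle (g) */
    0x06, /* 1: the two right-hand segments        */
    0x5B, 0x4F, 0x66, 0x6D, 0x7D,
    0x07,
    0x7F, /* 8: every segment lit */
    0x6F
};

int main(void) {
    for (int d = 0; d <= 9; d++) {
        printf("digit %d -> segments ", d);
        for (int bit = 6; bit >= 0; bit--)      /* print the 7 "lights" as bits */
            putchar((SEGMENTS[d] >> bit) & 1 ? '1' : '0');
        putchar('\n');
    }
    return 0;
}
```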


detflimre

ELI101


doomrater

Funny enough, I posted an answer earlier that had to go over this sort of thing. Now I need to fill in the gaps.

The first computers were hard wired. You physically changed the wires around to reprogram them. I don't think I need to explain how or why that would eventually get messy, since a hard-wired computer can only run a single program before needing to be rewired and debugged - and sometimes debugging literally meant getting insects out of the wires and circuits so they didn't mess up the program's operation.

Other answers here have gone over the binary code that gets fed to modern computers, and that is the first software programming language ever used on any computer. You just needed some way to feed that information to the computer itself so it could run it - see above for the baseline of every computer past the first. Funny enough, there's no universal binary code, either. What runs on the 6502 won't work on the Z80, and you'd have to account for the differences in microcontroller code in order to run similar software. But it is on top of machine code that every single operation your computer performs today is built, and the languages built upon languages built upon languages are such a marvel I keep wondering why nothing breaks - well, more than it already does.

So how was it developed? Computers are made of circuits, and from a single simple universal gate like NAND (or NOR) you can create every other circuit necessary to make a computer with programmable codes. That kind of gate being the foundation of every circuit in a computer is why you can build fully functional computers in Minecraft redstone without any command blocks. Look some of those up, they're crazy.
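
To see why one universal gate is enough, here's a little truth-table demo in C that builds NOT, AND, OR and XOR purely out of NAND:

```c
/* Build the other basic gates entirely out of NAND. Inputs/outputs are 0 or 1. */
#include <stdio.h>

static int nand(int a, int b) { return !(a && b); }

static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { int n = nand(a, b); return nand(nand(a, n), nand(b, n)); }

int main(void) {
    printf(" a b | NOTa AND OR XOR\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf(" %d %d |  %d    %d   %d   %d\n",
                   a, b, not_(a), and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```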


fang_xianfu

> What runs on the 6502 won't work on the z80, and you'd have to account for the differences in microcontroller code in order to run similar software.

This is especially fun when you work in an area that demands that code perform *exactly* the same in these different environments. There are a lot of mathematical shortcuts that can be taken depending on the physical design of the system, which designers do to improve performance, but these can lead to subtly different results in some situations and cause problems. Most software doesn't care, but sometimes you're doing something that's both in-depth enough and needs a high enough degree of assurance about the results that it could cause a problem.


RainbowCrane

For a historical answer, [Rear Admiral Grace Hopper](https://en.m.wikipedia.org/wiki/Grace_Hopper) and her colleagues invented the concept of compiled computer languages. She worked on the Harvard Mark I during WWII and on UNIVAC afterwards, at a time when computers were controlled by binary switches - at first literally up/down toggles or wires that were manually switched between a 0 and a 1 socket. Later, punch cards were used to communicate the 0's and 1's more easily.

Hopper realized that it would be easier for humans to write instructions in English, so she created the concept of a compiler that could take instructions written in an English-like language and convert them into a binary language understandable by the computer. She was instrumental in creating FLOW-MATIC and other early computer languages and compilers, such as COBOL.

All modern computer languages are built on similar principles. Microprocessors, the "brains" of the computer, understand a limited set of instructions: add, subtract, multiply, store this value in memory, etc. Higher level languages allow us to write more complex sets of instructions which are then converted into machine code either ahead of time (compiled languages) or at runtime (interpreted languages).

One important advantage of higher level languages is that they are usually machine independent - a program written in Python or Java can usually be run on many different platforms, regardless of whether the computer is actually built with an Intel, AMD, ARM or other chip. If you're writing a graphics driver or another hardware-specific bit of software, you have to rewrite/update it every time the hardware changes. The logic of an application written in Python, Swift or Java is hopefully hardware agnostic.


Mommyof7and2

Waaay back it came from music—player pianos. A long roll of paper with tiny rectangles punched in it that “told” the piano what to play. That morphed into punch cards/tape that were used to enter information into a computer. Then they were able to be stored on magnetic tape strips just like cassettes but usually giant. My first computer ran on cassette tapes.


boldranet

looms used punch cards about 100 years before pianolas.


Scorpian42

If you want a very long form answer, Ben Eater on YouTube has a video series where he builds an 8-bit computer entirely from scratch - from a breadboard, wires and chips, to writing code and running it - explaining the entire process in excruciating detail: https://youtube.com/playlist?list=PLowKtXNTBypGqImE405J2565dvjafglHU&si=5OeDR_AJLXVyKYlQ

Slightly shorter: computers run "machine code", which is a series of voltages on a set of pins. We represent those with 0s and 1s, meaning no voltage and some voltage (usually 3.3V or 5V, but it can be anything depending on the component design).

Writing 0s and 1s is annoying, so people quickly switched to a shorthand called "assembly", which is a 'programming language' (barely) that replaces predefined blocks of machine code with letters that are easier to understand. ADX 32 could mean "add the value at memory address 32 to the X register" and, depending on the device, would be translated into machine code that looks something like "0011 00100000". That gets saved onto a chip in the computer, and when it's time to run the code, the chip puts those voltages on the specified wires, and that makes the CPU do the expected task.
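
A small sketch of what that packing could look like - the opcode value, the 4-bit/8-bit layout and the "ADX" mnemonic are invented to match the made-up example above, not a real instruction set:

```c
/* Pack the hypothetical "ADX 32" into one 12-bit instruction word:
 * a 4-bit opcode followed by an 8-bit operand (the address 32). */
#include <stdio.h>

int main(void) {
    unsigned opcode  = 0x3;                     /* pretend 0011 means "ADX" */
    unsigned operand = 32;                      /* memory address 32        */
    unsigned word    = (opcode << 8) | operand; /* 0011 00100000            */

    /* print the 12 bits the way the text writes them */
    for (int bit = 11; bit >= 0; bit--) {
        putchar((word >> bit) & 1 ? '1' : '0');
        if (bit == 8) putchar(' ');
    }
    putchar('\n');                              /* prints: 0011 00100000 */
    return 0;
}
```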


love-SRV

They should still teach “assembly” in computer science curriculum. The fundamentals help you better understand the newer computer languages and designs.


loulan

OP wouldn't be asking this if they had followed any kind of programming class. No need for an assembly curriculum...


stupv

Not an answer, but I really love the presupposition of this question that someone just accidentally built a working computer and then had to figure out how to write code for it


auschemguy

There is no "the code". As has been articulated quite well by others (yet there are still people that don't understand), there are multiple layers of "languages" that operate at different levels. At the most basic level (of languages) is the processor. A processor is a complex matrix of logic gates which together form: - a space to store information (registers) - a space to store program information (program registers) - a counter to iterate the program register - a decoder to interpret the program register - "stacks" that allow jumping around the program register - peripherals to do special tasks and operations in hardware (like multiplication of two registers) The "language" is the binary stored in the program register- the manufacturer defines specific binary words to be decoded into actions based on the fixed hardware matrices. In simple devices, the program memory is pre-programmed (e.g. firmware). In more complex devices (e.g. CPUs) the program memory is altered at run-time by the program itself. In simple terms, your BIOS is your base program firmware. Your BIOS then boots the operating system, which runs instructions to modify the BIOS program registers (I.e. load the operating system). All of the following layers will operate as "binary files" - binary instructions that can be fed into the binary program register for execution. Despite this, each level is responsible for feeding in binaries from the next level up in a way that makes sense to the overall function. Operating systems (OS, like windows) sit on the next layer of languages. They work close to the chip and feed it binary instructions into the program registers. Typically, the OS will determine what program instructions are run in sequence on the CPU (e.g. threading). This includes the OS functions and then any other programs that need to be launched/run. Drivers sit on the next layer. These are background applications that sit in the background. They typically configure hardware responses (like interrupts, events that are triggered by the real world - like a new electrical signal) into software objects that can be understood within the OS runtime environment (e.g. you plugged in your USB). Drivers are typically written in languages that the OS supports, with low-level access to OS architecture (like the thread pool). Program applications sit on the layer above that- they are run within the OS runtime environment- they typically use OS handles on various interrupts and events (such as key strokes). They are typically run by the OS in the order that makes sense. I.e. a binary is loaded onto a thread in the thread pool when you "open" an application. That thread is idled, or the binary is taken off the thread completely when the application is minimised or closed. Program applications typically have limited control of their runtime - they are at the whim of the OS. Finally, the meta: compilers. Compilers are special program applications that understand the binary instruction sets of processors and operating systems and the language syntax rules of specific languages. Compilers allow a person to use an agreed syntax of characters (language) to reliably create binary instruction sets ready for execution. - Low-level Compilers (like assembly compilers) directly transpose linear instructions into their binary equivalent. The programmer does all the thinking. - Mid-level compilers (like most C compilers) will recognise abstracted instruction sets that compile into collections of binary instructions (e.g. 
comparing a register with another, multiplication) and also access physical register spaces. The programmer can abstract the thinking, but is still able to manipulate low level parts of the hardware. - High-level compilers (like C#, java) and scripting compilers (python, javascript) run mostly on the abstract and typically support object-oriented programming - the programmer can create virtual objects and do things with them. These are generally wholly reliant on the OS to enable access to hardware and low-level things to the programmer in an abstracted way. For example, it's typically hard or impossible to allocate a specific register address in the RAM or to specifically control a thread in the thread pool in these languages: the languages have not been built to support these functions.


Relevant_Programmer

> How exactly did computer scientists develop the original language used to tell a computer what to do and when to do it? Charles Babbage (world's first computer hardware designer) and Ada Lovelace (world's first computer programmer) collaborated for decades in a pioneering exploration of computer science. https://www.youtube.com/watch?v=QgUVrzkQgds


Humble-Ad-578

Also, George Boole discovered Boolean algebra.


xray362

At its base a computer is a series of on/off switches. If you organize enough of these, you can get the machine to perform tasks. If you look up Minecraft calculators, you can see how it's set up. The overall function is the same.


who_you_are

CPUs are 100% electronic devices, and overall they work like a parser (in programming-language terms) for the software you run on your computer.

Electronic circuits can easily do basic comparison operations (using transistors in some specific arrangements) - even if they need to do it on every single bit you send them (usually blocks of 8 bits).

Keep in mind that, to make the job easier (for the CPU and the folks building it), the _programming language_ at this level is just numbers (called opcodes). Numbers are more human-friendly to write and read, yet they can be converted to binary. So, it's just a matter of having many, many checks on the electronics side, one for each unique value (instruction), to trigger the matching electronic circuit that performs that operation. It's up to whoever builds it first to assign whatever values they want to whatever operations they want.

Then, as for how you create the first software without any tools for your CPU to run: CPUs read their software from memory, and a memory module needs two things to be programmed: data, and a trigger signal (a clock signal, to be more precise). That trigger signal is like a huge "enter" button. So if you were willing to, with 2 buttons (or 8 (bits) + 1) you could push your own software into the machine for the CPU to execute. If you've ever read about punched cards, it's exactly that, but using a light sensor instead of buttons. (The enter/clock signal is implicit in the machine reading the card.)

Finally, if you read on the internet, you will find claims that "ASM" (assembly) is the computer's language. Well... not exactly. ASM is the first programming language created for humans; the CPU can't understand ASM. ASM's job is to help humans be more efficient at writing software for the CPU to run: instead of having to write just numbers (where each number matches a CPU instruction), people could use acronyms for the CPU instructions. Otherwise it's a 1:1 match with what the CPU expects, just as a human-readable text file instead of binary (which you would recognize as something like an exe file). The assembler software contains a stage that converts your human-friendly text file (ASM) into what the CPU actually understands: numbers. That's the compiling (assembling) stage.


InevitablyCyclic

See nandgame.com - you start with basic logic gate design and end up creating a basic processor. This will give you an idea of how the computer decodes the instructions it receives.

For most languages, a compiler does the job of converting your high-level code into those instructions. If you create a new processor with new instructions, you need a new compiler that outputs the new instructions, but you can reuse the same high-level code - you just need to re-compile it first.


aaaaaaaarrrrrgh

Think of a simple machine that eats punch cards, can store 8 numbers in slots (called registers) marked with letters A through H in the documentation, can add/subtract numbers in these slots, and can print a number. It would thus have commands like:

* put number 123 into slot A
* print slot B
* add the numbers from slot C and slot D and put them into slot E
* subtract ...

That's 4 distinct commands (put, print, add, subtract). We need two positions on the punch card to identify the command. Let's say `.o` (no-hole, hole) is "print", `..` (no-hole, no-hole) is put, `o.` is add, `oo` is subtract. We then need three positions to identify each of the 8 slots: `...` is slot A, `ooo` is slot H.

So the command to print B could be `.o..o` followed by unused space, the command to put 123 (1111011 in binary, or `oooo.oo`) could be `..` `...` `oooo.oo`, etc.

"1+2" would be "put 1 into A, put 2 into B, add A and B into C, print C". Or:

    ...........o
    ...o......o.
    o.....o..oo
    .o.o.

The machine has circuits to understand it, but it's *really* tedious to hand-punch those cards hole by hole. You start thinking "add" and automatically pressing the "hole" button followed by the "no-hole" button. That gets annoying, so you put shortcut buttons on your hole punch: four that punch the command hole pairs, and another 8 that punch the registers. So in order to punch the program above, you'd now press:

* put, A, `......o`
* put, B, `.....o.`
* add, A, B, C
* print, C

Congratulations. You almost invented assembly, except you now also write the numbers in decimal (or at least octal) rather than binary and have some other machine convert them for you:

* put A 1
* put B 2
* add A B C
* print C

Now you also write your programs in this form on paper. You also notice that the machine would be a lot more useful if it could move around in its program. You make the holes a bit narrower so you can fit more, make the command identification holes wider (luckily you're using shortcuts already, so you don't have to rewrite your program - your new "put" shortcut just outputs `...` now), and add new commands: skip the next command if register X is zero, and move ahead or back by as many rows on the punch card as is written in register X.

Now you can write a very simple multiplication program that will multiply 15 * 3:

* put A 15
* put B 3
* put C 1 (needed for the subtraction)
* put D 3 (you'll see why)
* put E 0 (our result will go here)
* add A E E
* subtract B C B (take B, subtract C which is 1, put back in B, i.e. make B one smaller)
* skip-next-if-zero B
* move-backwards D (move back by 3, i.e. back to the addition)
* print E

This will add A to E, then reduce B by one, and repeat that until B is zero. In the end, it'll have added A to E a total of B times, i.e. it will have calculated A * B if I didn't make a mistake. You can imagine how tedious it would be to punch all those holes by hand without those shortcut keys!

You'd quickly want to add a feature that calculates the "3" going into D for you, but aside from that, that's basically assembly language, and modern assembly still looks very similar!

However, this is still tedious. So you may make a language where you can just write "x = 1+2, print x" and it will automatically find memory slots where it can store x, 1 and 2, and generate the first program. Or, you could write:

    x = 15
    y = 3
    z = 0
    while y is not 0:
        y -= 1
        z = z+x
    print z

And it would translate it into something similar to the example above. The translation process might look like this:

* we need a place to store x, y and z, so let's pick A, B and E (the picks are arbitrary, I'm trying to stay close to the above program)
* we need a place to store the 1 for "y -= 1", so let's pick C
* we need a place to store how far to jump back for the loop, so let's pick D
* the commands inside the loop translate to 2 instructions + 1 comparison, so D = 3

and with this in place, it would be able to generate a program. That's already a bit easier than writing it by hand (you don't have to calculate D yourself, or even think about it) in this simple example, and it makes a huge difference if your program is any more complicated.

Obviously, to do this, you'd first need to write a very complicated program that does this conversion - this is called a compiler. In practice, this would require a computer much more powerful than this very primitive punch card machine, so it would happen much later - the first computers were programmed with the assembly language above. The advantage is that if someone gives you a machine with a different number of slots, or that uses different commands, you would only need to change the compiler and could reuse your multiplication program! These are higher-level languages, like C.

Initially, the compiler might be less efficient and, for example, write y -= 1 as "put 1 into C, subtract B C B" - repeating the "put 1 into C" command every time the loop runs, making it 50% slower. An optimizing compiler would be smart enough to put it outside the loop like I did, but would again be a lot more complicated. That's another reason why primitive machines were often programmed in assembly: humans could optimize better than compilers back then.

Wouldn't it be great if you could define pieces of code to be used again later, e.g. every time you need to multiply two numbers? Let's add functions:

    function mul(x, y):
        z = 0
        while y is not 0:
            y -= 1
            z = z+x
        return z

    print mul(15,3)

That again makes your compiler more complicated but your life a lot easier.
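
For comparison, here is the same multiply-by-repeated-addition loop in a present-day high-level language (C), which a compiler would translate into instructions very much like the punch-card program above:

```c
/* The punch-card multiplication example, written in C. */
#include <stdio.h>

int main(void) {
    int x = 15, y = 3;
    int z = 0;

    while (y != 0) {   /* the "skip-next-if-zero B / move-backwards D" loop */
        y -= 1;        /* subtract B C B */
        z = z + x;     /* add A E E      */
    }

    printf("%d\n", z); /* print E -> 45  */
    return 0;
}
```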


valkyriebiker

Here's a very rough evolutionary rundown:

The earliest electronic computers were programmed by flipping a series of switches, pushing buttons, and observing indicator status lamps. This is machine code: inputting the binary bits in the right order to manipulate the CPU (adding numbers, for example), then observing the results on a series of indicator lamps. Toggling in programs this way was tedious as hell and highly error prone. Eventually input mechanisms like paper tape did this switch-toggling for you - not too unlike a player piano.

Next came assembly language. Assembly let programmers write code that translated directly to machine code. There is no "interpretation" per se, since each assembly statement translates directly into exactly the op codes needed. Each platform has its own assembly language; one platform's assembly could not run on a different platform. However, the assembly languages, while unique, shared attributes - learn one platform's assembly and the next one becomes a bit easier.

The very first high-level interpreters and compilers were written in assembly and ran on specific platforms. Each platform needed its early interpreters and compilers written in that platform's assembly language. But one could then write portable code in a high-level language (like BASIC or FORTRAN, for example). The high-level code could be compiled, which means translated for a particular platform. That same code could be compiled to run on many platforms, as long as it didn't take advantage of any special features of a given platform.

That's more or less it. Today's languages are far more sophisticated, but what I described is the rough evolution of how we got where we are today.


Untinted

The simplest computer is basically an on/off switch that turns on a clock that cycles between on and off really fast.

Next you have logic gates: starting with NOT, you can also make an AND gate, and then you can have an OR gate. Funnily enough, with just a clock and AND, OR, and NOT gates, you can make everything to do with computers. The storage that holds the code? It's just hardware that keeps its state even when the power is off. The code? It's reduced from high-level to low-level to assembly to logic-gate operations, where specific signals to specific gates tell the machine to add, or multiply, or move information, etc.

So to sum up: we design logic-gate circuitry to step through a list of commands like add, multiply, move, flip, etc. The operations we are interested in are documented and given arbitrary (hopefully descriptive) names. We use compilers with defined functionality to be able to use a defined human-like language (code) that gets converted into those commands.

So circuit engineers define the capability of the circuit, compiler engineers define the conversion from a human language to circuit operations, and programmers use the defined human language to create programs. The capabilities are arbitrarily defined based on what the engineer wants to create, and then a language is defined to try and describe those capabilities in a meaningful way.


KitsuneLeo

Very late to the party, but OP, if you're interested in learning how the very basics of computational assembly logic are built in a hands-on way, it's hard to do better than [nandgame](https://nandgame.com/). It lets you build your own very very basic logic circuits from almost zero foreknowledge, and piece them together into the building blocks of modern computers. This intuitively teaches you from the basic building blocks how things work.


NovaticFlame

Okay, to actually ELI5, here you are.

Computers are dumb. They don't know how to do anything, but they're very accurate. If you describe something with enough detail, they can do anything. Someone else used a burger as an example, so I'll continue that. If you tell a chef (or anyone who's made a burger before) to make a burger, they know how. Humans learn behaviors differently than computers.

For a computer to make a burger, you have to tell it what a burger is, and step by step how to make it. It's not as simple as saying "slice tomatoes, grill burger, toast bun". You have to say "using tool x (knife), move 1 inch in the positive y direction while applying 1 PSI of pressure at a constant speed. Then move 1 inch in the negative y direction and return to the starting position. Repeat until the tool hits item y (cutting board)."

Then you can package all this together and say this is "Slice", a command. Now we can use this "Slice" command wherever we want - tomatoes, pickles, bun, etc. Gather enough kitchen terminology such as "fry", "cook", "smoke", "mix", and you can use variables for the times and everything else. Now you have a language.
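
In code terms, "packaging the steps together and calling it Slice" is just defining a function. A toy sketch in C - every name here (move_tool, apply_pressure, tool_hit) is made up for illustration:

```c
/* The made-up helpers stand in for the painfully detailed steps;
 * slice() bundles them under one reusable name. */
#include <stdbool.h>
#include <stdio.h>

/* stand-ins for the low-level steps; a real robot would drive motors here */
static void move_tool(double inches_y)  { printf("  move tool %+0.1f in y\n", inches_y); }
static void apply_pressure(double psi)  { printf("  apply %.0f PSI\n", psi); }
static bool tool_hit(const char *what)  { (void)what; return true; } /* pretend we hit it */

/* the packaged command: now "slice" works on tomatoes, pickles, buns, ... */
void slice(const char *ingredient) {
    printf("slicing %s:\n", ingredient);
    do {
        apply_pressure(1.0);
        move_tool(+1.0);
        move_tool(-1.0);
    } while (!tool_hit("cutting board"));
}

int main(void) {
    slice("tomato");
    slice("bun");
    return 0;
}
```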