    Why the end of Optane is bad news for the entire IT world

    Analysis Intel is ending its Optane lineup of persistent memory, and that’s more of a disaster for the industry than it seems on the surface.

    The influence of ideas from the late 1960s and early 1970s is now so pervasive that almost nobody can imagine anything else, and the best ideas of the following generation have been largely forgotten.

    Optane was a radical, transformative technology, but because of this legacy technical debt, very few people in the industry realized just how radical it was. And so it flopped.

    To get to the heart of the matter, step back for a moment and ask: what is the main function of a computer file?

    The first computers did not have file systems. The giant machines of the 1940s and 1950s, built from tens of thousands of thermionic valves, had only a few words of memory. At first, programs were entered by physically rewiring the machine by hand: only the data lived in memory. The program ran and printed out some kind of result.

    As capacities grew, we got the von Neumann architecture, in which the program is stored alongside the data in the same memory. In some early machines, that memory was magnetic: a spinning drum.

    To get a program into memory, it had to be read in from paper: punched cards, or paper tape. Once the computer’s memory was big enough to hold several programs at once, the operating system appeared: a program that manages other programs.

    However, there was still no file system. There was memory, and there was I/O: printers, terminals, card readers and so on, but all the storage the computer could access directly was its memory. In the 1960s, memory usually meant magnetic core storage, which had one big advantage that is now often forgotten: when you switched the computer off, whatever was in core stayed there. Switch it back on and its last program was still in memory.

    Around this time, the first hard drives began to appear: expensive, relatively slow, but vastly bigger than working memory. Early operating systems were given another job: managing that huge pool of secondary storage, indexing its contents, finding the wanted parts and loading them into working memory.

    Two levels of storage

    Once the operating system started managing drives, a split appeared: primary and secondary storage. Both are directly accessible to the computer, rather than being loaded and unloaded by an operator like a paper tape or a deck of punched cards. Primary storage appears right in the processor’s memory map, and every individual word can be read or written directly.

    Secondary storage is a much larger, slower pool, which is not directly visible to the processor: it can only be accessed by asking another device, the disk controller, to fetch the contents of specified blocks from that big pool of storage, or to place blocks into it.

    This division carried on into the eight-bit microcomputers of the 1970s and 1980s. This author well remembers attaching a ZX Microdrive to his 48K ZX Spectrum. Suddenly, the Spectrum had secondary storage. The Spectrum’s Z80 CPU has a 64kB memory map, a quarter of which is ROM. Each Microdrive cartridge, although it held only 100kB or so, could store roughly twice the machine’s entire usable memory. So there had to be a degree of indirection: the whole contents of a cartridge could not be loaded into memory.

    It wouldn’t fit. So cartridges have an index, and then named blocks containing BASIC code, or machine code, or a screen image, or a data file.

    Ever since the microcomputer era we have called primary storage “RAM” and secondary storage “disk” or “drive”, even though in many modern end-user computers both are just different kinds of electronics, with no moving parts and no removable media.

    You start your computer by loading the operating system from “disk” into RAM. Then, when you want to use a program, the operating system loads it from “disk” into RAM, and that program can in turn load some data from disk into RAM. Even if it’s a Chromebook with no other local apps, its one app loads data from another computer over the internet, and that computer loads the data from disk into RAM before sending it to the laptop.

    Since UNIX was first written in 1969, this has been a mantra: “everything is a file.” Unix-like operating systems use the file system for all sorts of things that aren’t files: access to the machine is governed by metadata on files, I/O devices are accessed as if they were files, you can play sounds by “copying” them to an audio device, and so on. Since UNIX V8 in 1984, there has even been a fake file system, called /proc, which displays information about the running system’s memory and processes by creating pretend files that users and programs can read and, in some cases, write.
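
    To make that concrete, here is a minimal sketch, assuming a Linux system (where /proc/uptime and /dev/null exist): it reads kernel state and talks to a device using nothing but the ordinary file API. It is an illustration, not part of the original article.

        /* everything_is_a_file.c - kernel state and a device driver,
         * both reached through the plain file API. Assumes Linux.
         * Build: cc everything_is_a_file.c */
        #include <stdio.h>

        int main(void)
        {
            /* /proc/uptime is not a real file on disk: the kernel
             * fabricates its contents afresh on every read. */
            FILE *up = fopen("/proc/uptime", "r");
            if (up) {
                double seconds;
                if (fscanf(up, "%lf", &seconds) == 1)
                    printf("system has been up %.0f seconds\n", seconds);
                fclose(up);
            }

            /* /dev/null is a device, yet we "write a file" to it. */
            FILE *sink = fopen("/dev/null", "w");
            if (sink) {
                fputs("discarded by the device driver\n", sink);
                fclose(sink);
            }
            return 0;
        }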

    The file is a powerful metaphor, one that proved unimaginably versatile back in 1969, when Unix was written on a minicomputer with a maximum of 64k words of memory and no sound, graphics, or networking. Files are now everywhere.

    But files, and the file system, are just a crutch.

    The concept of the “computer file” was invented because primary storage was too small, and secondary storage was too expensive, too big, and too slow. The only way to attach millions of words of storage to a 1960s mainframe was a drive the size of a filing cabinet, holding far too much to fit into the computer’s memory map.

    So, instead, the mainframe companies designed disk controllers and built a sort of database into the operating system. Imagine, for example, a payroll program, which might itself be only a few thousand words in size, processing a file covering tens of thousands of employees by working in small chunks: read one record from the HR file and one from the salary file, calculate the result, write one record to the payslip file, then repeat. The operating system checks its indexes and turns that into instructions to the disk controller: “fetch block 47 from track 52, head 12, sector 34, and block 57 from track 4, head 7, sector 65… now, write this block back…”
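
    As a rough sketch of that batch-processing pattern, in C rather than anything a 1960s mainframe actually ran, and with file names and record layouts invented purely for illustration, the program only ever holds one record from each file in memory at a time:

        /* payroll_sketch.c - hypothetical batch job in the old style:
         * stream fixed-size records through a tiny working memory and
         * let the OS and disk controller do the block-level fetching. */
        #include <stdio.h>

        struct hr_rec     { unsigned id; double hours;       };
        struct salary_rec { unsigned id; double hourly_rate; };
        struct payslip    { unsigned id; double gross_pay;   };

        int main(void)
        {
            FILE *hr  = fopen("hr.dat", "rb");
            FILE *sal = fopen("salary.dat", "rb");
            FILE *out = fopen("payslips.dat", "wb");
            if (!hr || !sal || !out)
                return 1;

            struct hr_rec h;
            struct salary_rec s;

            /* One record in from each file, one record out: the whole
             * data set never has to fit in primary storage. */
            while (fread(&h, sizeof h, 1, hr) == 1 &&
                   fread(&s, sizeof s, 1, sal) == 1) {
                struct payslip p = { h.id, h.hours * s.hourly_rate };
                fwrite(&p, sizeof p, 1, out);
            }

            fclose(out);
            fclose(sal);
            fclose(hr);
            return 0;
        }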

    SSDs appeared in the 1990s, and by the first decade of this century they were becoming affordable. SSDs replace magnetic storage with electronic storage, but they are still secondary storage. SSDs pretend to be drives: the computer talks to a disk controller, sending and receiving sectors, and the drive translates those, shuffling around blocks of flash that can only be erased in large chunks, typically megabytes at a time, in order to emulate hard-disk-style writes of 512-byte sectors.

    The point is, flash memory has to be accessed this way. It is too slow to be mapped directly into the computer’s memory, and individual bytes of flash cannot be rewritten in place. To modify one byte in a flash block, the rest of the block’s contents must be copied elsewhere, and then the whole block erased. That is not how a computer’s memory controller works.
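
    A toy model of that constraint makes the cost obvious: changing a single byte means copying and erasing an entire block. This is purely illustrative (real firmware, block sizes, and error handling are far more involved, and all the names below are invented), but it is the kind of work an SSD hides behind its “disk” facade.

        /* flash_toy.c - toy model of flash write rules. Erasing sets
         * every bit to 1; programming can only clear bits to 0, so a
         * byte cannot simply be overwritten in place. */
        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        #define BLOCK_SIZE 4096   /* real erase blocks are often far larger */

        static uint8_t flash_block[BLOCK_SIZE];

        static void erase_block(void)
        {
            memset(flash_block, 0xFF, BLOCK_SIZE);   /* all bits -> 1 */
        }

        static void program_byte(size_t off, uint8_t value)
        {
            flash_block[off] &= value;               /* can only clear bits */
        }

        /* Changing one byte forces a read-modify-erase-rewrite of the
         * whole block. */
        static void rewrite_byte(size_t off, uint8_t value)
        {
            uint8_t copy[BLOCK_SIZE];
            memcpy(copy, flash_block, BLOCK_SIZE);   /* 1. save the block   */
            copy[off] = value;                       /* 2. change one byte  */
            erase_block();                           /* 3. erase everything */
            for (size_t i = 0; i < BLOCK_SIZE; i++)  /* 4. reprogram it all */
                program_byte(i, copy[i]);
        }

        int main(void)
        {
            erase_block();
            program_byte(10, 0x42);
            rewrite_byte(10, 0x24);   /* one byte = whole-block churn */
            printf("byte 10 is now 0x%02X\n", flash_block[10]);
            return 0;
        }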

    The future is here… but it’s gone

    Optane made it possible to eliminate all that. Like core store, it is working memory: primary storage. Optane kit is as big and as cheap as drives. It shipped in capacities of hundreds of gigabytes, the size of a modest SSD, but it plugged directly into the motherboard’s DIMM slots. Every byte appears right there in the processor’s memory map, and every byte can be rewritten directly. No shuffling blocks around to erase them, as flash requires. And it endures millions of write cycles, instead of tens of thousands.

    Many hundreds of gigs, even terabytes, of non-volatile storage that is thousands of times faster and thousands of times more durable than flash memory. Not secondary storage on the far side of a disk controller, but right there in the memory map.

    It can’t be rewritten indefinitely, no. So your computer still needs some RAM to hold variables and rapidly changing data. But instead of “loading” programs from “disk” into “RAM” every time you want to use them, a program is loaded once and then stays in memory forever, regardless of power cuts, regardless of whether you switch the computer off for a week’s holiday. Turn it back on and all your apps are still right there in memory.

    No more installing the operating system, no more booting. No more installing apps, either. The operating system is always in memory, and so are your applications. And if you have a terabyte or two of non-volatile memory in your computer, what do you need an SSD for? It’s all just memory. One small part can be written quickly and endlessly, but loses its contents when the power goes. The other 95% keeps its contents forever.

    Sure, if the box is a server, you might still want some spinning disks to manage petabytes of data. Datacenters need that, but very few personal computers do.

    Linux, of course, already supports this stuff. This particular vulture has documented how to use it on a well-known enterprise distribution. But Linux being Linux, everything has to be a file, so it supports Optane by partitioning it and formatting it with a file system: using primary storage to emulate secondary storage, in software.
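
    In rough outline, that model looks like the sketch below: map a file into the process’s address space and treat it as plain memory. The path and sizes are placeholders, and on real persistent memory true durability also needs CPU cache flushes and a DAX-capable file system (the sort of detail the PMDK libraries handle), so take this only as the shape of the idea, not a definitive recipe.

        /* pmem_sketch.c - byte-addressable "storage": map a file into
         * the address space and update it with ordinary loads and
         * stores. On a DAX-mounted persistent-memory file system the
         * mapping can bypass the page cache entirely. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define REGION_SIZE 4096

        int main(void)
        {
            /* placeholder path: a file on a hypothetical pmem mount */
            int fd = open("/mnt/pmem/counter", O_RDWR | O_CREAT, 0600);
            if (fd < 0 || ftruncate(fd, REGION_SIZE) != 0)
                return 1;

            /* After this, the file's bytes sit in the memory map. */
            unsigned long *state = mmap(NULL, REGION_SIZE,
                                        PROT_READ | PROT_WRITE,
                                        MAP_SHARED, fd, 0);
            if (state == MAP_FAILED)
                return 1;

            state[0] += 1;                      /* an ordinary store, no write() */
            msync(state, REGION_SIZE, MS_SYNC); /* ask that it reach the media   */
            printf("this program has run %lu times\n", state[0]);

            munmap(state, REGION_SIZE);
            close(fd);
            return 0;
        }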

    No mainstream operating system today understands the concept of a computer that has only primary storage, no secondary storage at all, split between a small volatile part and a large non-volatile part. It is hard even to describe to people who are familiar with how current computers work. I have tried.

    How do you find a program to run if there are no directories? How do you save stuff, if there is nowhere to save it to? How do you compile code, when there is no way to #include one file into another because there are no files, and nowhere for the resulting binary to go?

    There are ideas out there for how to do this. The Reg wrote about one of them 13 years ago. There is also Twizzler, a research project investigating how to make such a system look enough like Unix that existing software can use it. When a lab researcher at HP invented the memristor, HP got very excited and had some big plans… but bringing a new technology to mass production takes a long time, and in the end, HP gave up.

    But Intel made it work, put it into production, brought it to market… and not enough people cared, and now Intel is giving up too.

    The future was here, but viewed through the old, scratched lens of 1960s minicomputer operating system design, well, if everything is a file, then Optane is just some kind of really fast drive, right?

    No, that’s not it. It was the biggest step forward since the minicomputer. But we blew it.

    Goodbye, Optane. We hardly knew you. ®
