The Amiga used a CPU rated for 8 megahertz, but clocked at 7.14 megahertz. What was the reason for this number? I remember it was something to do with a multiple of the frequency of the video circuitry, but I forget the details.


I think you meant 7.15909 MHz.

7.15909 MHz is twice the NTSC color burst frequency (3.579545 MHz). The NTSC color burst frequency is 455/2 times the line rate, the line rate is 262.5 times the field rate, and the field rate is 60 * 1000 / 1001 (59.94 Hz).

See also https://www.repairfaq.org/samnew/tvfaq/tvwinsswf.htm

On PAL systems that have different field rates, line rates, etc. the CPU is clocked at 7.09379 MHz.
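
If you want to check the chain of multipliers yourself, here is a quick back-of-the-envelope script (Python, purely for illustration; the constants are exactly the ones quoted above):

    # Derive the NTSC Amiga CPU clock from the field rate, as described above.
    field_rate = 60 * 1000 / 1001        # 59.94 Hz field rate
    line_rate = 262.5 * field_rate       # ~15734.27 Hz horizontal line rate
    color_burst = (455 / 2) * line_rate  # ~3579545.45 Hz color burst frequency
    cpu_clock = 2 * color_burst          # ~7159090.91 Hz, i.e. 7.15909 MHz

    print(f"field rate  : {field_rate:12.4f} Hz")
    print(f"line rate   : {line_rate:12.4f} Hz")
    print(f"color burst : {color_burst:12.4f} Hz")
    print(f"CPU clock   : {cpu_clock:12.4f} Hz")

The PAL figure of 7.09379 MHz falls out of the corresponding PAL chain in the same way.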

I believe you meant 525/2 not 455. – cbmeeks 9 hours ago
    
But why tie the CPU to the video display refresh rate when the two should be unrelated? I'm guessing this is to ensure consistent HBlank/VBlank interval timings for games, but the Amiga was meant for creative software - so why? – Dai 8 hours ago
    
In general it's easier, cheaper and more reliable to tie everything to a single master clock than to implement multiple clock sources and safe clock domain crossings. – Peter Green 7 hours ago
    
The most important acronym for older home computers was "BOM". And you had to keep it small. – jdv 5 hours ago
    
I remember there was a program that could put the PAL Amiga into NTSC mode and make it run slightly faster at the cost of losing some lines of resolution. – Matthew Lock 4 hours ago

When color video was introduced in the USA, the horizontal scan rate was set at precisely 15750 * (1000/1001) Hz (roughly 15734.27 Hz), and the color sub-carrier frequency (also called the chroma clock) was defined as 227.5 times the horizontal scan rate, i.e. roughly 3579545.45 Hz. Many computers of that era use a multiple of the chroma clock as the pixel clock (the Amiga, for example, uses 4x chroma, or 14318181.8 Hz).

On systems where video generation shares a memory bus with everything else, it's generally necessary that a fixed relationship exist between the video frequency and the CPU clock. In the case of the Amiga, the system clock is 1/2 the dot clock (2x chroma), which is a bit faster than some earlier machines, though the 68000 doesn't do as much per clock cycle as some other processors.

Some other typical machines:

  • Atari 2600: dot clock=chroma; CPU clock=chroma/3
  • Atari 800: dot clock=chroma*2; CPU clock=chroma/2
  • Apple II: dot clock=up to chroma*4; CPU clock=chroma*2/7
  • Commodore VIC-20: dot clock=up to chroma*8/7; CPU clock=chroma*2/7
  • Commodore 64: dot clock=up to chroma*16/7; CPU clock=chroma*2/7

The IBM PC had video memory that was separate from the main memory, and was designed to allow the video clock to be independent of the CPU clock. Nonetheless, the original PC with a CGA card used a dot clock of chroma*4, and a CPU clock of chroma*4/3.
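
To make those ratios concrete, here is a small Python sketch that simply restates the multiples given above (the chroma value is the standard 315/88 MHz NTSC sub-carrier; the machine list and "up to" dot clocks are taken straight from the answer, nothing here is measured from real hardware):

    CHROMA = 315e6 / 88  # NTSC color sub-carrier, ~3579545.45 Hz

    # (dot clock, CPU clock) as multiples of the chroma clock; dot clocks marked
    # "up to" in the list above are the maximum figures.
    machines = {
        "Amiga":        (4 * CHROMA,      2 * CHROMA),
        "Atari 2600":   (CHROMA,          CHROMA / 3),
        "Atari 800":    (2 * CHROMA,      CHROMA / 2),
        "Apple II":     (4 * CHROMA,      CHROMA * 2 / 7),
        "VIC-20":       (CHROMA * 8 / 7,  CHROMA * 2 / 7),
        "Commodore 64": (CHROMA * 16 / 7, CHROMA * 2 / 7),
        "IBM PC (CGA)": (4 * CHROMA,      CHROMA * 4 / 3),
    }

    for name, (dot, cpu) in machines.items():
        print(f"{name:13s} dot {dot / 1e6:9.5f} MHz   CPU {cpu / 1e6:8.5f} MHz")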


The architecture of most "color computers" of the 70s-80s was very tightly built around the NTSC color video standard.

Almost all of them had a 14.31818 MHz crystal. Note that this is four times the 3.579545 MHz frequency of the NTSC color standard, which was called a "color clock". They divided that crystal frequency down to derive their actual CPU clock frequency. For instance (a quick numeric check follows the list):

  • Apple II was 1.023 MHz (1/14 of crystal, 3.5 color clocks per CPU cycle) and used a 1 MHz rated CPU.
  • Atari 2600 VCS was 1.19 MHz (1/12 of crystal, 3 color clocks per CPU cycle).
  • Atari 400/800 was 1.79 MHz (1/8 of crystal, 2 color clocks per CPU cycle) and used a 2 MHz rated CPU.
  • Even the IBM PC used this same crystal that everyone else was using, dividing it by 3 for the CPU clock (4.77 MHz). (But keep in mind the memory clock is 1/4 of that, so memory throughput was crystal/12, the same as the Atari 2600 - ha!) Why choose a multiple? Despite staggering chip-fab capability, the IBM PC team was pathological about using off-the-shelf designs and components. And this "kept the door open" to a future "color computer" design with shared video RAM, which came to fruition as the IBM PCjr.
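
A minimal sketch of those divisions (Python; the "color clocks per CPU cycle" figure is just the divisor over 4, since the color clock is crystal/4):

    CRYSTAL = 14.31818e6       # the ubiquitous 14.31818 MHz crystal
    COLOR_CLOCK = CRYSTAL / 4  # 3.579545 MHz NTSC color clock

    # CPU clock divisors quoted in the list above.
    divisors = {
        "Apple II": 14,        # ~1.023 MHz
        "Atari 2600 VCS": 12,  # ~1.19 MHz
        "Atari 400/800": 8,    # ~1.79 MHz
        "IBM PC": 3,           # ~4.77 MHz
    }

    for name, div in divisors.items():
        cpu = CRYSTAL / div
        print(f"{name:15s} {cpu / 1e6:6.3f} MHz "
              f"({COLOR_CLOCK / cpu:.2f} color clocks per CPU cycle)")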

Why didn't these color computers just run the CPU asynchronously at its maximum rated speed, while the video system operated on the color clock? Because memory was at a premium in those days, so they used memory-mapped video. This was dual-ported in the simplest possible scheme**, which required the memory clock to run in lockstep with the video system. The Apple II even used the video scan to accomplish dynamic memory refresh.

So you see: the Amiga, the last of the machines built with the "color computer" mindset (heh, speaking of another), deliberately chose a CPU speed once again divided down from that same familiar crystal, and once again lockstepped to the NTSC color clock. This let its designers leverage all the color-computer design work that had come before, rather than reinventing the wheel.

It's difficult to imagine, in this day and age of 4K video, gigabytes of video RAM, and GPUs that can mine Bitcoin faster than the CPU, that we were once so pious in our worship of the NTSC standard. Now that we no longer need it, we can admit it was kind of a lousy standard.


** To digress on pre-Amiga-era dual-porting: on the Apple II, the video system got every other RAM cycle; on the Atari, the far more sophisticated ANTIC chip interrupted the CPU to take the memory cycles that video needed. ANTIC also took care of memory refresh, which allowed linear mapping of video instead of the goofy Apple mapping (Apple's mapping was laid out so that the video scan doubled as dynamic memory refresh, a genius double-use that was Wozniak's stock in trade). This meant available CPU power varied between display modes and depending on whether you were in vertical blank. In a lot of Atari games, the CPU "rode the raster", spending its very limited cycles cueing up the changes to colors or sprites that would happen on the next scan line, waiting for horizontal blank, executing the changes, and cueing up the next. All the computational work of playing the game happened after the raster reached a low-attention area like the bottom scoreboard, and during the vertical blank period (a significant amount of time when the raster was off the visible screen). All this was coded in assembler, of course, with a great deal of painstaking cycle-counting to make sure the operations could happen in the requisite time.

