The Apple IIGS Megahertz Myth

There are many legends in computer history. But a legend is nothing but a story. Someone tells it, someone else remembers it, and everybody passes it on. And the Apple IIGS has a legend all its own. Here, in Userlandia, we’re going to bust some megahertz myths.

[A side note before proceeding… I see all you Hacker News people filing in. I haven’t had the time recently to properly format this with the numerous source links, citations, and footnotes that exist in the video version. I’ll try to get those filled in here ASAP. For now, you might want to experience it all in the video version.]

I love the Apple IIGS. It’s the fabulous home computer you’d have to be crazy to hate. One look at its spec sheet will tell you why. The Ensoniq synthesizer chip brings 32 voices of polyphonic power to the desktop. Apple’s Video Graphics Controller paints beautiful on-screen pictures from a palette of thousands of colors. Seven slots and seven ports provide plenty of potential for powerful peripherals. These ingredients make a great recipe for a succulent home computer. But you can’t forget the most central ingredient: the central processing unit. It’s a GTE 65SC816 clocked at 2.8 MHz—about 2.72 times faster than an Apple IIe. When the IIGS launched in September 1986 its contemporaries were systems like the Atari 1040ST, the Commodore Amiga 1000, and of course Apple’s own Macintosh Plus. These machines all sported a Motorola 68000 clocked between 7 and 8 MHz. If I know anything about which number is bigger than the other number, I’d say that Motorola’s CPU is faster.

“Now hold on there,” you say! “Megahertz is just the clock speed of the chip—it says nothing about how many instructions are actually executed during those cycles, let alone the time spent reading and writing to RAM!” And you know what, that’s true! The Apple II and Commodore 64, with their 6502 and 6510 CPUs clocked at 1 MHz, could trade blows with Z80-powered computers running at three times the clock speed. And the IIGS had the 6502’s 16-bit descendant: the 65C816. Steve Wozniak thought Western Design Center had something special with that chip. In a famous interview in the January 1985 issue of Byte magazine, Woz said,

“[the 65816] should be available soon in an 8 MHz version that will beat the pants off the 68000 in most applications, and in graphics applications it comes pretty close.” That’s already high praise, but he continues further: “An 8 MHz 65816 is about equivalent to a 16 MHz 68000 in speed, and a 16 MHz 68000 doesn’t exist.”

If you read this in January of 1985 you’d have little reason to doubt Woz’s words. He built the Apple I in a bedroom from a box of scraps, and when given real resources he followed it up with the Apple II. Even when faced with doubters, he’s got the confidence that comes from engineering the impossible.

“Some of the Macintosh people might disagree with me, but there are ways around most of the problems they see.”

But that “should” in “should be available” was doing a lot of work. Eighteen months later, when the IIGS finally shipped, there was no 8 MHz ‘816. It was as nonexistent as Woz’s imaginary 16 MHz 68000. Even three years later, 8 MHz chips were barely available. What happened? Woz promised us 8 MHz of 68000-crushing glory!

If you poll a random vintage computer user they might posit the idea that Apple held the IIGS’ processor speed back during its development so it wouldn’t compete with the Macintosh. There’s an unsourced claim on Wikipedia that limiting the CPU speed to 2.8 MHz was a deliberate decision, followed by a note—which is sourced, thank you very much—that the original CPU was certified to run at 4 MHz. And that’s true—there are many IIGSes with CPUs labeled for 4 MHz. This idea has circulated through newsgroups and webpages for decades, so it’s no surprise that it found its way into a Wikipedia article too.

But this theory never sat well with me. People making the claim that Apple restrained the IIGS’ CPU speed for marketing purposes rarely provide sources to back it up. I understand their logic—Apple spent the majority of its marketing and monetary might making the Macintosh the machine of the future. Because the Mac was Steve Jobs’ baby, you end up with declarations like “Steve Jobs hobbled the IIGS so it wouldn’t compete with the Mac.” It’s a common take, especially because it plays into a lot of people’s dislike of Jobs. But there’s one major problem with it: all of Steve Jobs’ power at Apple was stripped away in May of 1985 after the months of executive turmoil that led to the company's first quarterly loss. The IIGS didn't launch until 16 months later.

So why were IIGSes with chips rated at 4 MHz not running them at that speed? Why 2.8 MHz? Isn't that… weirdly specific? Did an 8 MHz machine really get put on ice due to executive meddling? To solve these mysteries I descended into the depths of Usenet, usergroup newsletters, magazines, and interviews. My journey took me through a world of development Hell, problematic yields, and CPU cycle quirks. And this walk down the Apple chip road starts with the wonderful wizard named Woz.

The Apple IIx

It’s the summer of 1983 and Steve Wozniak has just wrapped up a two-year leave of absence from Apple Computer. It all started with a six-week stint in the hospital after crashing his Beechcraft Bonanza airplane on February 7, 1981. Amnesia from a hard concussion short-circuited Wozniak’s short-term memory. After his recovery he took a leave from Apple to go back to UC Berkeley and finish his degree in computer science and electrical engineering… and run a few rock festivals. By June of 1983 Woz felt he was ready to return to work, and asked Dave Paterson, the head of Apple II engineering, for a job—but this time, in the trenches.

His timing was excellent. Even though the Lisa was taking headlines and the Macintosh was shaking up R&D, the Apple II was making all the money. Brisk sales of the IIe along with the imminent launch of the IIc meant the Apple II division was busier than ever, even if they weren’t getting all the glory. And while Steve Jobs was heralding the Mac as the Next Big Thing, designing a next-generation Apple II as a contingency plan was just good business.

At the heart of this proposed machine was a brand new CPU. Bill Mensch’s Western Design Center was busy developing the 65816, a 16-bit update to the venerable 6502 architecture. This chip would bring 16-bit computing to the Apple II while promising backwards compatibility. Users wouldn’t lose their investment in applications, add-in cards, or accessories. Alongside the new CPU was a special coprocessor slot that allowed the user to install a board with a 68000 or 8088. The goal was to build a bridge between the 8- and 16-bit world, so the project had code names like Brooklyn and Golden Gate.

This project would be known publicly as the IIx thanks to Wozniak discussing it on CompuServe or at user groups. But as late ’83 rolled into early ’84 the IIx project stumbled over multiple roadblocks. The coprocessor slot added layers of complexity that distracted from the mission of architecting a new Apple II. But a major complication was the 65816 itself. Apple expected engineering samples in November 1983, but didn’t actually get them until February 1984. What’s worse, those late chips were buggy, unstable, and ultimately unusable. WDC delivered a second batch of chips a few weeks later, but they were no more reliable than the first.

Even if Apple abandoned the coprocessor slot, the project couldn’t go forward without a CPU, and the IIx was cancelled in March of 1984. Now before you aim your ire at Steve Jobs, A+ Magazine, in their IIGS development history, says it was the team leads who suggested canning the troubled machine. With no managerial appetite for a next-generation machine, the Apple II team pivoted from a moonshot to something more achievable. Dan Hillman and Jay Rickard started a project to consolidate most of the discrete chips of an Apple II into a single chip called the Mega II. When they finished the design six months later they weren’t quite sure what to do with it. Would they reduce the cost of the IIe or reduce the size of the IIc?

Imagine their surprise when Woz suggested a second shot at a 16-bit Apple II. The conditions seemed right to give it another go. Less expensive 16-bit computers like the Atari ST were looming on the horizon and the Mac’s hot start was slowing down. By October 1984 Apple finally had a good supply of working 65816 CPUs to design a system. And the Mega II would free up a lot of board space to add new graphics and sound chips. But just as important as chips and specs was a new, focused mission statement. This computer would be an Apple II by Apple II fans for Apple II fans. Woz, Hillman, Rickard, Harvey Leitman, and Lee Collings spent the rest of 1984 hammering out the specifications and solving hard problems like how to speed up memory access without breaking existing software.

Now we’re finally back to that Woz interview I quoted earlier. Byte published it in two parts across the December ’84 and January ’85 issues, and based on the average time to press I reckon it took place in October 1984. By this point the IIx is dead and buried and he’s talking about the new 16-bit Apple II, now codenamed Phoenix. His excitement about an 8 MHz ‘816 is palpable, but, again, Woz was careful to say it “should be available soon.” Woz left Apple in February 1985 when the ink for this interview was barely dry. He had a dramatic fight with John Sculley after the Apple II was snubbed at the annual Apple shareholders’ meeting in January 1985. Apple II sales supplied seventy percent of Apple’s revenue in 1984 and Woz’s Apple II compatriots felt they weren’t getting their due. Steve Jobs may not have dialed back the IIGS’ clock speed, but he did shovel endless piles of money towards his pet projects at the expense of the Apple II. Even if Woz had stuck around, the 8 MHz ‘816 of his dreams was years away. The IIGS wouldn’t sniff 8 MHz until Applied Engineering released the 7 MHz TransWarp GS accelerator four years later in 1989.

The Need for Speed

If you go looking online for photos of Apple IIGS logic boards, there’s a decent chance you’ll see a 4 MHz GTE G65SC816 CPU. Most IIGSes shipped with 3 or 4 MHz CPUs, yet they all ran at 2.8 MHz regardless of the chip’s rated speed. Why?

First, we must understand where that clock signal comes from. The IIGS, like many computers of its era, derives its base clock from a crystal oscillator. The one in the IIGS vibrates 28.636 million times per second: 28.636 MHz. The VGC divides that in half, and the resulting 14.318 MHz is supplied along with a stretch signal to other parts of the system. I bet you’ve already noticed that these frequencies are multiples of the NTSC colorburst frequency of 3.58 MHz. Keep that in mind; it’ll be relevant later.

This 14.318 MHz clock travels to a special ASIC which—depending on your board revision—is called the Fast Processor Interface or Control Your Apple chip. One of its responsibilities is dynamically controlling the CPU’s clock speed. The IIGS normally runs in Fast mode at 2.8 MHz, but the user can switch to Normal mode in the Control Panel, which reduces the speed to 1.023 MHz. It’s like the turbo switch on PCs, except controlled by the FPI. This lets the IIGS run speed-sensitive software at the same speed as an original Apple II. But even in fast mode there are times the CPU needs to slow down to access other parts of the system.

The CPU, ROM, and Fast RAM are on the 2.8 MHz “fast” side, while the Mega II chip, slots, sound, video, and so on are on the 1 MHz “slow” side. When the CPU is running in fast mode and needs to talk to something on the slow side, the FPI dynamically throttles the clock signal to a 1 MHz cycle time to let the CPU synchronize with the Mega II side. This is usually invisible to the user because the system still executes the majority of its cycles at the faster speed, but it means the CPU is not always executing as fast as it could. I haven’t even touched on the eight percent performance penalty from the cycles the FPI spends refreshing RAM.

There’s nothing inherent to this design that limits the system to 2.8 MHz. The Apple IIGS Hardware Reference, Wayne Lowry’s Cortland Custom IC Notes, and Daniel Kruszyna’s KansasFest 2011 presentation about the FPI make it clear that the IIGS’ fast mode could support higher speeds. In principle a redesigned FPI or CYA could use a divider other than 1/5 for the clock signal. A 1/4 divider of 14.318 MHz yields a 3.58 MHz clock speed, which should be well within the capabilities of a 4 MHz chip. And once again, that “should” is doing a lot of work. So why didn’t it run at that speed?
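To put numbers on the divider math, here’s a quick sketch. The crystal frequency, the 1/5 fast-mode divider, and the published speeds come from the sources above; the 1/14 ratio for Normal mode is inferred from the 1.023 MHz figure, and the 1/4 divider is the hypothetical faster option, not something a stock FPI offers.

```python
# Clock derivation in the Apple IIGS, following the dividers described above.
CRYSTAL_MHZ = 28.636            # master crystal oscillator
COLORBURST_MHZ = 3.579545       # NTSC colorburst frequency

base = CRYSTAL_MHZ / 2          # VGC halves the crystal: 14.318 MHz

print(base / 14)                # ~1.023 MHz: Normal mode, stock Apple II speed
print(base / 5)                 # ~2.864 MHz: Fast mode, the familiar "2.8 MHz"
print(base / 4)                 # ~3.580 MHz: the hypothetical 1/4 divider
print(base / COLORBURST_MHZ)    # ~4.0: every speed is a colorburst multiple
```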

The Birth of the 65C816

The 65C816 and IIGS are inextricably linked, and the ‘816’s story starts in 1981 when GTE’s Desmond Sheahan approached Bill Mensch and Western Design Center about designing a CPU. GTE wanted to manufacture their own chips for their key phone systems, so they licensed both a CMOS manufacturing process and a 6802 CPU design from Mitel. But Mitel’s CPU design team wasn’t up to the task, so GTE asked Mensch to step in. Mensch offered two options: a two-chip 6802 solution or a 6502-based microcontroller. Either one could be manufactured on the CMOS process GTE licensed from Mitel. Sheahan convinced the GTE brass that the one-chip solution was the way to go, and the 65C02 was born.

GTE became the first licensee of the 65C02 thanks to WDC designing their 65SC150 telephony microcontroller. Eventually WDC would license the core to other companies, like NCR, Synertek, and Rockwell. The result was WDC netting a royalty every time one of these licensees sold a standalone CPU or microcontroller with its 65C02 core. Functional 65C02 silicon was available in 1982, and the revenues from licensing deals were finally filling up WDC’s coffers. This is when—according to Mensch’s Computer History Museum oral history and other sources—Apple approached him about a 16-bit successor to the 6502.

The prospect of a big player like Apple being the launch client for a new CPU was hard to resist. Further information on the early days of the ‘816 is fairly elusive, but looking at both historic and contemporary interviews with Mensch and Woz reveals Apple’s influence on the chip’s design. One influence was compatibility. When designing the ‘816 Mensch improved the 6502 bus architecture to eliminate little weirdnesses like the false read cycles generated by store accumulator instructions. This consequently broke Steve Wozniak’s Disk II controller, which had been designed to rely on exactly that weirdness.

Now there are two ways to solve this problem: redesign the floppy controller, or add the weirdness back in. Apple tried the first one, redesigning its floppy controller into a single chip called the Integrated Woz Machine. This chip was developed independently for the Apple IIc to control its built-in floppy drive. Among its many improvements was eliminating the false read cycle requirement. The Apple IIx could have used an IWM, but the UniDisk drives that took advantage of it wouldn’t be out until 1985. Therefore existing Disk II interfaces and drives still had to work with the IIx. If you were to claim Apple II compatibility, but not be able to work with one of the most popular boards, well, there’d be protests lining the sidewalks of De Anza Boulevard. Other cards might also depend on specific 6502 timing behaviors. Mensch eventually honored Apple’s request that the 65C816 be fully compatible with the NMOS 6502’s timings.

Apple wouldn’t be Mensch’s only licensee for the ‘816—the core can be found in countless embedded systems and microcontrollers. Licensees could extend the core to customize it into a microcontroller or a system-on-chip, or add features specific to their needs. A great example is Ricoh, who licensed the ‘816 core and added several functions, like dedicated multiply and divide units along with special DMA functions. All these add-ons were at the request of a pretty famous customer: Nintendo. The result was the Ricoh 5A22, the CPU inside the Super NES.

One of the famous tales of the ‘816’s development is how it was laid out without computer-aided design. As told by Bill Mensch, he designed the ‘816 on a card table at WDC’s humble headquarters in Mesa, Arizona, and his sister Katherine did the layout on mylar sheets, at a time when most semiconductor designers had moved on to CAD tools for designing and laying out their circuits. Mensch is rightly proud of this feat, but the decision to lay out the chip by hand with mylar and rubylith wasn’t without consequences.

Let’s circle back to that interview with Woz. GTE published data sheets in 1985 for the G65SC816 which detailed 2, 4, 6, and 8 MHz variants of the chip. Apple, as the prime customer, would’ve had these data sheets far in advance. This would be consistent with Woz’s belief that faster variants were on the way, but for purposes of designing the IIGS they had to settle for 4MHz chips. Multiple photographs of IIGS prototype logic boards with 4MHz chips are on the web, and 4MHz parts shipped in production IIGS computers. But the promise of a faster CPU is very tantalizing to a computer designer, and I’m sure it wasn’t just Woz who was excited about future faster CPUs.

But when would GTE actually produce those faster variants? That’s the question. One source is a 1986 interview with Mensch published in the Fall/Winter issue of COMPUTE!’s Apple Applications magazine. This interview took place before the IIGS announcement, likely sometime in the summer of ’86. Mensch states that their average part on a 3 micron process should be 4 MHz, and that an upcoming 2.4 micron process would yield 6 MHz parts. I’d wager that the 8 MHz parts would rely on a 2 micron process at that reduction rate. Some 6 MHz parts did exist, and Bill Mensch’s Cortland-badged prototype IIGS has one. A photo of it was shown at his 2021 VCF East panel, and I will note that it’s a WDC-branded chip bearing a date code of the first week of February 1987. Whether Mensch put this chip in there himself or it was a sample installed by an Apple engineer is not explained. Nor is it known if its FPI chip actually drives it at a higher speed. But this shows that faster ‘816s did exist. So what was getting in the way of these faster chips actually being used?

Yields, Bugs, and Errata

This brings us to the 65816’s next quandary: yields. This might be a surprise to you, given that the CPU is found in embedded devices worldwide. But a common thread through many of the reports I’ve read about the early days of the ‘816 is that WDC and its fabrication partners struggled to deliver 4MHz and faster chips on time, in volume, and at spec.

Mensch positioned the 4MHz chip as the ‘816’s volume product and said as much in that earlier interview with COMPUTE!.

“Our typical product is 4MHz. We sell 2MHz into the retrofit market, but our typical run-of-the-mill is 4MHz.”

But in reality the IIGS shipped with a mixture of 3 and 4 MHz parts which actually ran at 2.8 MHz in fast mode. Which brings us back to the question of why a machine designed around a 4MHz part would ship with an almost 30% haircut in clock speed. Conspiracy theories aside, could there be a technical reason why the IIGS would run slower than its CPU’s rated speed?

In the same Fall/Winter 1986 issue of COMPUTE!’s Apple Applications where Mensch talked about future plans for the ‘816, David Thornburg interviewed Steve Wozniak about his role in developing the IIGS. The subject of the 65816 came up and Woz delved into some details about its clock speed.

“Our early ideas for the computer had it running around 8MHz. Soon we found we had to back off to about 5.5MHz, and then to 2MHz for that version of the processor. In the end the product came out around 3MHz, which is a good compromise.”

This is consistent with his comments about the desire for 8MHz more than a year earlier in the Byte interview. Woz doesn’t mention what factors made them back off on the clock speed, but during my research I learned a lot about the troubles the ‘816 faced in terms of scaling and yields.

One problem was GTE’s ability to supply chips—not just to Apple, but to other customers. The IIGS would be shipping by the tens of thousands when it launched, and this necessitated a good quantity of chips on hand. Dave Haynie—yes, the Amiga’s Dave Haynie—had trouble in 1985 sourcing 4 MHz 65816s for a potential Commodore 128 successor. He posted about this experience on Usenet in March of 1990.

“At that time, they had fully specced 8MHz parts, yet in ’85, GTE (the only company actually MAKING 65816s) had all they could do to make enough 4MHz parts. Rumor is that Apple managed [to] get enough for the IIGS by actually having a special 2.8MHz version tested.”

He further elaborates with:

“When the GS came out, the only company making '816s was GTE. The main reason I couldn't get any 4MHz ‘816s in quantity was that Apple bought them all. They could make a real deal on 2MHz parts, since the yield on 4MHz was so low, they had more of those than they knew what to do with.”

Haynie also comments in other posts about how early samples of the ‘816 were delivered at 500KHz—yes, kilohertz—and maybe that’s a clue as to why Apple was unhappy in the Apple IIx timeframe.

Yields are a common woe in semiconductor manufacturing, and Haynie’s comments about yields line up with what we see in the real world. GTE’s three micron process apparently had problems producing enough 4 MHz chips in volumes to satisfy its customers. Many IIGSes have a 3 MHz G65SC816 despite this speed rating not showing up in GTE’s data sheets. My guess—I can’t find any confirmation for this, but it’s what makes the most sense—is that these were chips that couldn’t be binned as 4 MHz, so GTE labeled them as 3 MHz. Insisting on 4 MHz would have meant shipping fewer computers, and the IIGS was delayed enough as it was. While VLSI supplied some 4 MHz 65C816 CPUs later in the IIGS’ life, the vast majority of CPUs found in these computers were made by GTE—or, eventually, by California Micro Devices, which purchased GTE’s Microcircuits division in September 1987 after GTE decided to get out of the business. Apple was also negotiating with NCR as a second source, but according to Mensch and others, the deal fell apart before the IIGS shipped.

But let’s say for the sake of argument that GTE was able to produce as many 4 MHz chips as Apple or anyone else wanted to buy. Based on the FPI’s clock divider mechanism and a 14.318 MHz source clock, Apple had a logical clock speed target of 3.58 MHz using a 1/4 divider. That’d still be a compromise over Woz’s dream machine, but it’d be faster than what we got. And if (or when) faster processors became available, the FPI clock divider could be adjusted for them.

Yet those faster processors weren’t materializing; at least, not in any volume. Yields were a factor, yes, but the faster speeds revealed other problems. Trying to learn more about this took me down a rabbit hole of Usenet posts, Applied Engineering documentation, AppleFest event reports, and 6502 development forums. All of them pointed to a common factor: the REP and SEP opcodes. When designing the new 16-bit native mode for the ‘816, Bill Mensch added many new opcodes to enable new features and capabilities for programmers. Two of these new opcodes—SEP, for Set Processor status bits, and REP, for Reset Processor status bits—set and clear flag bits in the processor’s status register. These are crucial to the dual 8/16-bit nature of the ‘816 and how it can switch between 8- and 16-bit operations on the fly.
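To make that concrete, here’s a minimal Python sketch of what SEP and REP do to the status register. The flag bit positions and the immediate-mask semantics are the 65816’s documented native-mode behavior; the scaffolding around them is illustrative, not an emulator.

```python
# SEP/REP semantics on the 65816 status register (P), native mode.
M_FLAG = 0x20   # memory/accumulator width: 1 = 8-bit, 0 = 16-bit
X_FLAG = 0x10   # index register width:     1 = 8-bit, 0 = 16-bit

def sep(p: int, mask: int) -> int:
    """SEP #const ($E2): set every status bit whose mask bit is 1."""
    return (p | mask) & 0xFF

def rep(p: int, mask: int) -> int:
    """REP #const ($C2): reset every status bit whose mask bit is 1."""
    return p & ~mask & 0xFF

p = 0x30                      # 8-bit accumulator and 8-bit index registers
p = rep(p, M_FLAG)            # REP #$20: accumulator/memory now 16-bit
p = sep(p, M_FLAG | X_FLAG)   # SEP #$30: everything back to 8-bit
print(hex(p))                 # 0x30
```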

Unfortunately these opcodes proved problematic at higher speeds. Multiple posts relay statements from WDC or programming guides that timing problems with the layout mask prevented these instructions from completing in their allowed cycle times. These problems only got worse as they tried to shrink the mask down to smaller process nodes. I’m loath to take what amounts to second- and sometimes even third-hand accounts from newsgroups and forums at face value—they don’t call it the Net of a Million Lies for nothing, after all. But I’m willing to believe the overall theory based on other corroborating evidence (like this WDC data sheet note from 1991). If you look at an Apple IIGS accelerator like the TransWarp GS or the ZipGSX, you’ll notice that they’re not just a CPU and some cache memory. The TransWarp GS has a bunch of support chips and gate array logic, while the ZipGSX has a full-blown ASIC on board.

The GALs for the TransWarp GS were reverse engineered long ago, and Reactive Micro lays it out plainly: GAL3 handles opcode detection and speed control. This matches up with posts relaying comments from Applied Engineering about stretching clocks to handle problematic REP and SEP opcode timings.
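Here’s that idea in rough, illustrative pseudocode. The $C2 and $E2 values are REP and SEP’s documented opcode encodings; everything else (the cycle times, and the software framing of what is really hard-wired GAL logic) is an assumption for illustration.

```python
# Sketch of accelerator-style opcode detection and clock stretching.
# Real accelerators do this in GAL/ASIC bus logic; the numbers below are
# round illustrative figures, not measured values from any board.
REP, SEP = 0xC2, 0xE2   # documented 65816 opcode encodings

def cycle_time_ns(opcode: int) -> float:
    fast_ns = 1000 / 7.0    # ~143 ns cycle at an accelerator's 7 MHz clock
    slow_ns = 1000 / 2.8    # ~357 ns stretched cycle for the touchy opcodes
    return slow_ns if opcode in (REP, SEP) else fast_ns
```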

Analyzing these posts also reveals the temperature of the Apple II community circa 1990. Apple announced a revised IIGS in San Francisco just before AppleFest in September 1989, and the reaction from attendees was muted at best. There was no CPU speed increase, unfortunately, but base memory grew to 1 MB and the new ROM added more toolbox routines and some feature enhancements. There was someone whose reaction was anything but muted, though, and it was one William David Mensch.

According to multiple accounts of the event, during his keynote address Jean-Louis Gassée said that there would be no speed increase for the revised IIGS because of difficulties securing faster 65816s. Mensch was in attendance and was understandably displeased with this statement. He approached one of the crowd mics and said that he was the designer of the CPU and had in his hand a bag of 12 MHz ‘816s. He proclaimed that if Apple ordered the 12 MHz chips, he would ship them. Jean-Louis reportedly made a comment about wanting a reliable production run, and the two men got into a heated back-and-forth before Gassée left and Mensch was escorted out. No recordings of this keynote exist, or if they do they’re locked away in the Apple Archives at Stanford University.

The version of the story Mensch recounts at his 2021 panel at VCF East largely follows what I’ve read in contemporary reports, except for one difference. He includes an anecdote about how he got Jean-Louis’ Apple coffee cup. He mentions running into Gassée after the keynote, and says that Gassée was very upset and threatened to, quote, “kick [Mensch’s] butt.” No punches were actually thrown, and no butts were actually kicked, and the tale peters out without really explaining how Mensch got the coffee cup. But it shows just how bad the relationship between Apple and WDC had become.

Now, you’re a smart viewer, so I bet you know how chickens and eggs work. A company like Apple doesn’t just buy bags of chips; they buy Frito-Lay’s entire annual output. Mensch could have samples of silicon, but WDC wasn’t the one making the chips in volume; its licensees like GTE were. If the fab (GTE/CMD or VLSI, say) can’t guarantee a production run of 50,000 chips, the order doesn’t happen. The evidence in front of us—the delays during IIx development, the inability to deliver 4MHz parts at volume, and the opcode issues that prevented scaling—would certainly justify skepticism of WDC’s ability to work with a fab to deliver faster chips at volume. There were still possibilities for a faster IIGS, though, and these would play right into an Apple II fan’s belief that Apple held it back.

Accelerators, ASIC Incorporated, and Mark Twain

But let’s say you weren’t buying tens of thousands of chips like Apple was; maybe you only needed a thousand or two. Were smaller customers being taken care of? Ray Settle of the Washington Apple Pi Journal was also at the fall 1989 AppleFest, where he reported on the confrontation between Mensch and Gassée. Afterwards, he mentioned a visit with an Applied Engineering representative. Settle still hadn’t received his TransWarp GS, and the AE rep pinned the blame on unreliable 7 MHz CPUs. Another attendee report posted on Usenet by Michael Steele featured similar comments. Keep in mind that the TransWarp was first announced in 1988, didn’t ship until the fall of 1989, and struggled with speed and supply restrictions through 1990. This is further supported by A2-Central’s accelerator reviews in February 1991, where it’s mentioned that AE resorted to offering 6.25 MHz versions of the accelerator because of supply issues—and reliability issues—with 7 MHz chips. Zip Technologies also took a while to ship their 7MHz ZipGSX accelerator, which finally appeared in October 1990, almost a year after the TransWarp GS.

But we don’t have to settle for second-hand reports. Enter Applied Engineering employee John Stephen III. In an October 1989 Usenet post he mentions the problems with getting 7 MHz chips, and alludes to the REP/SEP timing issues. But the other interesting thing he mentions is that getting high-speed 10 MHz ‘816s to run at their rated speeds required boosting their input voltage well beyond the standard 5 volts. This made the chips run hotter and often resulted in crashes. And I see all you overclockers in the audience nodding your heads.

Scaling problems aren’t unusual when you move to a smaller process node, and sometimes a layout needs fixes—or even a complete redesign. The original mask laid out by Katherine Mensch on the WDC card table had reached its limits. Redoing the mask wouldn’t be easy or cheap, especially when higher-speed ‘816s were a smaller piece of the revenue pie. Royalties from the 65C02, 65802, and slower embedded ‘816s were paying the bills. Mensch was also busy working on a 32-bit iteration of the architecture: the 65832. But this proposal never made the jump from datasheet to shipping dock.

Interestingly, this is where a new player steps in. While researching the yield problems I encountered numerous posts on comp.sys.apple2 about an “ASIC 65816.” This tripped me up at first, because there are numerous application-specific integrated circuits that contain an ‘816 core. But no, it turns out that a company named ASIC was creating a new 65816 processor using gate arrays. And they promised that this redesigned ‘816 would reach speeds of 20MHz and beyond.

Imagine my surprise when I saw the name Anthony Fadell mentioned in these posts. Could that be the same Anthony Fadell—Tony Fadell—who was in charge of the iPod? Sure enough, I found an excerpt of Tony’s book, Build, where he talks about designing a 65816 for his startup company, ASIC Incorporated! Now we’re on to something. This gave me enough clues to dig up the November/December 1989 issue of The Michigan Alumnus magazine, where an article tells the tale of the fateful summer of 1989. Fadell was an undergrad student at the University of Michigan, and he spent his summer internship designing nuclear control computers at Ronan Engineering. When he met William Hayes the two hit it off immediately. Fadell was a big Apple II fanboy and yearned for more power. Hayes knew chip design and had connections with other chip designers. The two decided that they would design a better version of the 65816 using a sea-of-gates array. Fadell borrowed $5,000 from his uncle, Hayes leveraged his connections to get cut-rate pricing on supercomputer time, and the two reverse engineered a 65816 design in six weeks. After finding a fab willing to manufacture their design, Fadell was prepared to pitch his prototype to potential patrons.

The Michigan Alumnus article is coy about who the chip was for, but it mentions Fadell flying to Texas in September 1989 to meet with a manufacturer of Apple IIGS accelerator boards. There he negotiated a purchase order and a two year contract, catapulting him to the role of CPU vendor at the age of twenty. With these clues we can deduce that Fadell’s customer was Applied Engineering. If all had gone according to plan, ASIC's chips should have started production in early 1990, and formed the basis of the next generation of TransWarp accelerators. There’s that word again—should.

Ultimately, no TransWarp GS or ZipGSX accelerators ever shipped with ASIC’s design. The chips did exist—multiple trip reports from 1990 AppleFests mention Fadell demonstrating his CPUs in TransWarp accelerators. And in 2022, Fadell posted a photo on Twitter of an ASIC 65816 which he claims would still work today—I’m guessing this is one of the 17 MHz chips used in the AppleFest demos. But posts about ASIC fizzled out after the spring of 1991, which coincides with Fadell graduating from Michigan. The ultimate fate of the chip isn’t really known—did Fadell face legal challenges from WDC? Did Applied Engineering give up on him? Or was it because—as described in an interview with WIRED—he was just too busy roaming the halls of General Magic trying to score a job with former Apple superstars Andy Hertzfeld and Bill Atkinson? My guess is the latter, since General Magic hired him later that year.

In the same tweet as the photo of the chip, Fadell said that he, quote, “sold some to Apple for a new Apple IIGS revision before they canceled it!” Many IIGS superfans have heard tell of the cancelled final revision of the IIGS, code-named Mark Twain. Featuring an internal floppy drive, a hard disk, and a redesigned logic board, the Mark Twain is what many thought the ROM 03 IIGS should have been. It’s entirely probable that Apple fitted some prototypes with faster CPUs. But when media outlets like InCider/A+ magazine were given a top-secret demo a mere month before its cancellation, the clock speed was still the same. And the few Mark Twain prototypes that escaped Apple’s development dungeons were equipped with the usual 4 MHz chip. This is where the business side and the technical side butt heads.

The Mark Twain was under development in late 1990 into 1991, and then mysteriously no longer under development as of June 1991. Rumors of Apple killing the entire Apple II line hung over the product like a dark cloud, and the dev team had hoped to prove those rumors were greatly exaggerated. Despite John Sculley’s statements promising Apple’s full support for an installed base over five million strong, the lack of marketing and technical improvements to the Apple II over the years meant his words rang hollow. Apple had just introduced the Macintosh LC as their low-cost color Mac, and IBM-compatible PCs were getting cheaper and more capable. If Apple released the Mark Twain without any CPU speed boosts, it would’ve appealed mostly to the Apple II’s cost-conscious institutional education market. Would existing IIGS owners buy one instead of just getting an accelerator card for a fraction of the price? And how many new users would it bring to the platform? The Mark Twain would’ve been like the Amiga 1200: a decent improvement, but ultimately too little and too late. The March 1991 release of the Apple IIe card for the Mac LC put another nail in Mark Twain’s coffin, because many users—especially educators—bought a IIGS for backwards compatibility with classic Apple II software. If you’re the number cruncher choosing between a run of 50,000 Mark Twains, which cost a lot more to build, and 50,000 IIe cards for Mac LCs that are already in production, already in a warehouse, or already sold to customers, which would you pick?

Now this is where you can rightfully criticize Apple for holding back the Apple IIGS. A Mark Twain with even a 12MHz CPU from ASIC would’ve been a legitimately powerful, well equipped computer for its price. But that would’ve been one last campaign in a war long since lost. Maybe ASIC could have helped, but the good ship Apple was sailing straight into a storm called the beleaguered era. Costs, staff, and projects were starting to spiral out of control, and the Mark Twain would’ve only delayed the inevitable death of the Apple II.

Sanyo, Nintendo, and the End of Acceleration

Even if Apple didn’t see fit to release a faster IIGS, accelerator cards kept the enthusiast market alive for a few more years. Upgrade guides for the TransWarp gave tips on how to install higher-rated ‘816s to get even more speed. This usually required buying samples of higher speed processors from WDC, changing crystals, upgrading the cache, and acquiring new GALs. Once you hot-rodded your card you’d often need to supply more juice over the 5V rail to keep things stable.

All this hackery was finally put to bed in 1992 when new 65C816s rated at 14 MHz hit the market. These chips took Usenet posters by surprise, especially after the ASIC saga. Fabbed by Sanyo, the 14 MHz models could run cool and stable at 5V and apparently solved the issues with the REP and SEP opcodes. Sanyo achieved this by abandoning the die laid out by Katherine Mensch and creating a new layout from scratch. Why Sanyo chose to invest in this is unclear—I found a lot of speculation that they wanted to build a PDA based on the ‘816. Sanyo’s a giant manufacturer, so I’m sure they found a use for it. Maybe WDC felt the heat from ASIC, or maybe they saw ARM and 68K pivoting to the embedded market and finally bit the bullet to stay competitive.

Existing accelerators could be upgraded by changing out their CPU and clock crystal, but by this point the IIGS was but a fraction of the ‘816s out in the wild. Optimistic estimates of the number of IIGSes shipped hover around 1.25 million. The other ‘816 system that most people know—the Super NES—sold 1.3 million units in 1990 in Japan alone, according to numbers compiled by NeoGAF user Aquamarine, who claims to have gotten them directly from Nintendo’s Kyoto offices. The system sold nearly 50 million units worldwide in its lifetime. Mensch is very proud of the SNES using the 65816 and speaks highly of working with Ricoh, the manufacturer of Nintendo’s chips. Seeing as the 5A22 was a custom job by Ricoh, I wondered if they fixed the SEP and REP problem during its design. It wouldn’t be beyond them; they did tweak the 6502 to dodge patents when designing the NES’ 2A03. I haven’t seen the SNES assembler community complain about issues with REP and SEP on the basic 3.58 MHz part, but that doesn’t mean problems don’t exist. Same with emulation and FPGA groups, though I’d still defer to experts in that field. In a high-resolution decapped scan of a 5A22, the CPU core looks very similar to the first-generation layout seen in the Programming the 65816 book—I’m guessing it still has the REP/SEP flaw.

And like the IIGS, the SNES could take advantage of accelerators thanks to add-on processors in cartridges. The most well known is the SuperFX, but Ricoh also made a faster 65816. Better known as the Super Accelerator-1, the Ricoh 5A123 runs at a 10.74 MHz clock speed—three times the speed of a 5A22. You’ll find it in fan favorites like Super Mario RPG and Kirby Super Star. SA-1 games started shipping in 1995, which is late in the Super NES’ lifecycle. I haven’t found a decapped scan of the 5A123, but Nintendo was shipping 1-Chip SNES consoles around the same time. The 5A122, which combined the CPU along with the SNES’ custom PPU chips, formed the heart of the 1-Chip console. Decapped scans of that chip exist, and its layout looks very similar to the post-Sanyo core. I’d bet that the 5A123 has that same core, and doesn’t suffer from REP and SEP issues.

ARM, Möbius, and The Road Not Taken

Apple discontinued the IIGS in December 1992. Even though the IIe hung on for another year or so, the platform truly died that day. Even in the timeline where WDC had 8 MHz chips ready in 1983, and Apple put the GUI on the IIx in 1984, I still think the Apple II as we knew it would’ve died eventually. There are several limitations that would necessitate an eventual migration to a new architecture.

The primary roadblock is the Mega II side of the machine. This is supposed to be what makes it an Apple II, but it drags the rest of the machine down with it. Depending on the video system for clock generation and timing was a practical engineering choice that became hard technical debt for almost all 1980s computer architectures, especially ones that traced their roots to the 1970s. The GS is an excellent example of trying to maintain compatibility while adding capability, but it had an obvious ceiling.

But something that gets lost in the 65816 versus 68000 arguments is that CPU architectures have qualities beyond raw speed. You might care about power consumption, memory mapping, or ease of acquisition. And all these factors are a balancing act depending on your application. The 68K’s massive flat memory space was a big point in its favor, as well as its native support for user and supervisor separation. These don’t matter as much to someone who’s writing hand-crafted single-tasking assembly language apps, but they very much matter to someone building a multitasking UNIX workstation.

And that’s not to say that 68K isn’t good for assembly applications. It’s got a great reputation among assembly programmers. But as the world turned to compilers and development environments the 68K architecture was primed for the rising popularity of languages like C. More people were getting into programming, and it’s unrealistic to expect them all to have the potential to become machine language maestros like Rebecca Heineman or Nasir Gebelli. C compilers exist for 6502 and 65816, but it's fair to say that these architectures weren't particularly suited to the intricacies of C.

Another sticking point for high-speed 65816s is the speed of memory. Assuming you threw the entire Mega II side of the IIGS away and built a new ‘816 computer from scratch, a 14 MHz example would need very fast main memory to operate without a cache. In the early 90s, that kind of memory was barely available. How Woz would have built his 8 MHz IIGS in 1985 while affording a decent amount of memory is a rather inconvenient question.
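Some rough arithmetic shows why. Assume (my own ballpark, for illustration) that the address is only valid for about half of each cycle:

```python
# Why a cacheless 14 MHz '816 needs very fast RAM. The 50% address-setup
# figure is a ballpark assumption for illustration, not a measured value.
cpu_mhz = 14.0
cycle_ns = 1000 / cpu_mhz            # ~71 ns per CPU (and memory) cycle
access_window_ns = cycle_ns * 0.5    # ~36 ns left for the RAM to respond

print(f"{cycle_ns:.0f} ns cycle, ~{access_window_ns:.0f} ns access window")
# Mainstream DRAM of the early 90s was rated at 70-120 ns: not even close.
```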

Apple wasn’t the only company facing the limitations of their early designs. Commodore and Atari decided to pivot to the 68000, like Apple did with the Macintosh. Tech debt eventually caught up with them too, especially Commodore, but the problems weren’t unsolvable—the Amiga tried to migrate to PowerPC, after all. Even the Macintosh and IBM PCs had eventual reckonings with fundamental planks of their platforms. Another company was facing the same conundrum: Acorn Computers. The similarities to Apple are there: they were dependent on a 6502-based architecture and had a crucial education market with a significant installed base. Acorn did ship a computer with the 65816—the Acorn Communicator—but when Sophie Wilson visited WDC in 1983 and saw Mensch and crew laying out the 65816 by hand, it struck her: if this motley crew on a card table could design a CPU, so could Acorn. Thus Wilson and Steve Furber forged their own CPU: the Acorn RISC Machine.

Though today’s Arm and WDC sell very different CPUs, they do have something in common: they’re fabless semiconductor designers that license their cores and architectures to other companies. Apple is of course Arm’s most famous partner: they joined Acorn and VLSI to form Advanced RISC Machines in 1990 to create the ARM610 CPU to power the Newton. But what you might not know is that Apple’s involvement with ARM originates with a desire to replace the CPU in the Apple II. The little-known Möbius project, helmed by Paul Gavarini and Tom Pittard in the Advanced Technology Group, was an ARM2-based computer that could emulate 6502, 65816, and 68000 code. Gavarini and Pittard started the project in 1986 and were demoing and compiling benchmarks in 1987—right on the heels of Acorn releasing the Archimedes! There’s little information about this on the web, with Tom Pittard’s bio and Art Sobel’s ARM pages being some of the surviving hints to its existence.

Based on my knowledge of how ARM2 works, I believe the emulation performance of Möbius is wholly derived from the ARM2’s memory system. ARM’s designers were inspired by the 6502’s efficient memory access and optimized the 8 MHz ARM2 to wring as much performance out of the available memory as possible. By using 4 MHz 32-bit-wide fast-page memory, pipelining, and special load-store instructions, ARM2 could perform burst transactions at twice the speed of random ones. With a theoretical maximum of 32 MB/s bandwidth in fast-page mode, this was eight times the maximum bandwidth of an 8 MHz 68K shackled to 2 MHz DRAM. This strategy would peter out eventually because memory speed couldn’t keep up with CPU speed, but hey—that’s what cache is for!
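Here’s the back-of-the-envelope version of that comparison, using the 68000’s documented four-clock minimum bus cycle:

```python
# Peak memory bandwidth, ARM2 vs. 68000, per the figures in the text.
arm2_burst = 8_000_000 * 4        # 8 MHz x 32-bit bus = 32 MB/s sequential
arm2_random = 4_000_000 * 4       # random accesses at the 4 MHz DRAM rate

m68k_bus_rate = 8_000_000 / 4     # 68000 spends 4 clocks per bus cycle: 2 MHz
m68k_peak = m68k_bus_rate * 2     # x 16-bit (2-byte) bus = 4 MB/s

print(arm2_burst / arm2_random)   # 2.0: bursts run twice as fast as random
print(arm2_burst / m68k_peak)     # 8.0: the "eight times" figure above
```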

I’m not sure if Möbius would have been the Apple II’s savior—Acorn’s Archimedes wasn’t 100% backwards compatible despite including a BBC BASIC interpreter and eventually a BBC Micro emulator. But with Econet network adapters and expansion podules to connect old Beeb peripherals, the Arc could ingratiate itself with Acorn’s existing installed base. Could the Apple II have been reborn with an ARM CPU? Maybe. Nobody mentions how well Möbius integrated with the rest of the Apple II architecture, like slots or video or sound. And say what you will about Apple’s troubles with Western Design Center; ARM was barely a blip on the radar in 1988. Apple wasn’t going to upturn their cart for an unproven architecture from a competitor. Möbius was a skunkworks project; it could’ve taken years to turn its demo into a shipping product, and the 68040 was already on the drawing board in 1988. But it was still worth it: Möbius’ benchmarks convinced Larry Tesler that ARM could save the Newton from the disastrous AT&T Hobbit processor. And hey—without ARM, Apple wouldn’t have the iPod, iPhone, and Apple Silicon today. So it worked out in the end.

The End of an Odyssey

What a ride, huh? Thanks for making it this far down a fifty-plus minute rabbit hole. I can’t claim that this is the final take on the subject, since so many of the players aren’t on the record. But I’m pretty confident in saying that Apple did not artificially limit the IIGS’ clock speed during its development for marketing purposes. Now, I’m not a fool—I know Apple didn’t push the IIGS as hard as it could, and it was very much neglected towards the end of its run. But if the REP/SEP flaws hadn’t existed and GTE could’ve shipped stable 4 MHz chips in volume, I’m sure Apple would’ve clocked them as fast as possible in 1986.

I’ll admit that I initially started this deep dive out of spite. The idea that “Steve Jobs deliberately held the IIGS back, bleh bleh” is everywhere, but it's all just people saying things in an endless game of telephone, with no actual evidence. That’s enough to grind anybody’s gears, but what’s worse are people who I respect uncritically repeating these things in videos and blog posts. It hurts me to see videos with millions of views repeating old Internet urban legends pushed by partisans filtered through Wikipedia articles with bad citations.

But that spite subsided quickly once I started untangling the messy web of people and circumstances that wrapped around this story. I realized that what I wanted wasn’t to prove anybody wrong. What I wanted was to get people to think about these stories and why they became legends. Of course, you could flip that right back around at me and say “who made you the guardian of love and justice?” And that’s a fair point. But my goal here isn’t to push an agenda; it’s to get a better understanding of how things happened and why history went the way it did. I’ve provided my evidence, and it’s up to you to judge whether my argument is compelling.

And even then, one of those people who needed a refresher on computer legends was yours truly. I’ve done my share of repeating things based on bad sources, just because I had a fanboy desire to defend something. Or because I just thought a story was neat and didn’t look too deeply into it. It wasn’t so much about somebody being wrong on the internet; it was the realization that I could be the somebody who’s wrong on the internet! It was humbling, really. As the years go on I’ve realized that there’s so much out there to learn, even when I thought I was already an expert. A perfect example is that Bill Gates BASIC easter egg, where I got tripped up by an oft-repeated legend until I actually dug into it. And as vintage and retro tech enthusiasts, are we not people of science? Are not our minds open to new ideas? We’re into this because we enjoy the history and personal connections, and we should all be excited about digging deep and not just repeating the same old story.

Even though the IIGS may not have been able to unleash its full potential, it’s still an amazing machine even at its base speed. If you haven’t had the chance to play with one, give it a try. And turn on accelerator mode in your emulator to get a feel for what could have been.