
The RCA 1802 and its competitors

-------------------------------
Date: Tue Jan 18, 2005 6:43 am


Subject: Re: [cosmacelf] Re: 1802 History
RCA's CMOS was a logical process to use [for its time]. A chip with as high a
gate count as the 1801/1802 would have been impossible in TTL -- it would
have cooked itself to death. Most microprocessors were PMOS, but that is
a slow process (like the Intel 8008 and TI micros mentioned). The really
high-tech chips of the day were NMOS (like the 8080).
RCA's CMOS made perfect sense. It was very low power, yet relatively
fast (certainly compared to PMOS). Noise immunity was very high, letting
them differentiate their product so they didn't have to compete
head-to-head against Intel, Motorola, TI, etc.
Dec 9 2005
Hughes Aircraft second sourced the 1802, and their HMDS development system was
an S-100 computer with (mostly) generic S-100 boards. It had two CPU boards
(Z80 and 1802), and a Micromation DSDD 8" floppy disk controller card. The
Micromation board had no disk controller chip -- just a few dozen TTL ICs!
They patched the Micromation board to be memory-mapped instead of I/O mapped.
The 1802 did disk read/write using DMA, synchronized by using the WAIT line.
I think that technique would have worked even with a 1.44meg floppy and an
1802 at 8/7.5us = 1.067 MHz.

The HMDS was unique in another way; its 40x24 character video board used a
40-character FIFO which was loaded with one line of characters, then
recirculated 8 times to produce the display. Since the 1802 only has one DMA
channel, it could generate video or do disk I/O, but not at the same time;
the screen went blank as each sector was read/written, etc.
Jan 5, 2006
I've found that 1802 programs are usually smaller than their equivalent for
other CPUs. For instance, Tom Pittman's Tiny BASIC was written for the 1802,
8080, 6800, 6502, and Z8. They ranged in size from 2k to 3.5k; the 1802
version was the smallest. So, I have few doubts that you could keep the BDOS
and CCP at the same size without having to leave anything out.
1802 programming is a bit different, however. Efficient coding requires a
different approach than it would for a memory-oriented CPU like a 68xx or
6502, or for a register-limited CPU like the 8080.
Mar 9 2006
The 1802 instruction set is only "weird" if you know other instruction sets.
To a beginner, this is irrelevant.
The nice thing (to me) about the 1802 instruction set is that it is
simple to understand. My first microcomputer was a Mark-8, which had an
8008 CPU. It was a terrible, convoluted, confusing mess (both hardware
and software). Then I tried the 1802, and it was like a breath of fresh
air. I could finally say, "Aha! NOW I understand..."
Jan 6, 2006

Lee Hart wrote:
>> I've found that 1802 programs are usually smaller than their
>> equivalent for other CPUs.
Allison Parent wrote:
> Call me a sceptic on that. A lot of the code I've been reading is
> bigger. I have seen TB for 8080 that was quite tiny...
The smallest 8080 Tiny BASIC I've seen was 1.5k, but took some drastic
measures such as all math being hexadecimal, no parentheses, and no operator
precedence. 8080 Tiny BASIC with the same features as Pittman's was 2.5k
bytes. The 1802 version was 2k bytes.
Pittman's opinion was that the 1802 was the most memory-efficient of all the
contemporary CPUs. I have to agree; I've done a lot of assembler for both the
8080 and 1802, and the 1802 wins. My IDIOT monitor is just 512 bytes, and
that includes a bit-banger software UART. 8TH (a version of FORTH) was under
4k bytes.
The 1802 certainly loses on speed, however. It takes more clock cycles to do
most tasks, and given that its clock speed is similar to contemporary CPUs
(8080, 6800, 6502, etc.), it is always slower.
> The 8080 is register limited but it also does 16 bit math adds, has a
> stack and call instruction, byte compare.
The 8080's sole 16-bit math instruction is DAD, which is an unsigned add of
two 16-bit registers. That's not enough for serious 16-bit math; you need
subroutines to do any serious work. DAD tends to be a rarely used instruction
in 8080 code.
The 8080 has CALL and RET, so programmers tend to use them. This has come to
be considered "normal programming practice". But they are the longest (4
bytes to CALL and RET) and slowest instructions (8 bus cycles to CALL and
RET). It's easy and obvious, but tends to make rather inefficient code from a
speed and memory point of view.
The 1802 uses a different technique; it assigns program counters to each
subroutine, so "call" and "ret" are shorter and faster, with no stack activity
(2 bytes and 4 bus cycles to SEP and RET). If you have a small enough number
of subroutines, this is a more efficient technique.
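As a rough sketch of how this looks in practice (the register assignments and
the label SUB are just illustrative, not from any particular program): the
main program runs with R3 as its program counter and keeps R5 pointed at the
subroutine.

        SEP  R5        ; "call": R5 becomes the PC; execution jumps to SUB
        ...            ; control resumes here after the subroutine's SEP R3

EXIT:   SEP  R3        ; "return": switch the PC back to R3
SUB:    ...            ; subroutine body (entry point)
        BR   EXIT      ; end by branching to the SEP R3 above; after it
                       ; executes, R5 is left pointing at SUB again,
                       ; ready for the next SEP R5

Each SEP is one byte and two machine cycles, which is where the "2 bytes and
4 bus cycles" figure comes from.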

When 1802 programs become complex enough to outgrow this simple technique, the
instruction set lets you build custom CALL and RET macros to suit the problem
at hand. SCRT (Standard CALL and RETURN Technique) is one; it imitates what
the 8080 and "normal" CPUs do. But when I was implementing 8TH, I found it
faster and easier to use other techniques.
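For comparison, a call under SCRT looks conventional from the caller's side.
A minimal sketch (MYSUB is a placeholder name; R3 is the program counter, R4
and R5 point at the standard CALL and RETURN routines, and R6 carries the
return address):

        SEP  R4        ; enter the SCRT CALL routine
        DW   MYSUB     ; 2-byte subroutine address (assembler word directive)
        ...            ; execution resumes here when MYSUB ends with SEP R5

The CALL routine pushes R6 on the stack, copies the caller's R3 into R6, loads
R3 from the two address bytes (leaving R6 pointing past them, at the return
address), and does SEP R3; the RETURN routine reverses the process.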
It boils down to the old CISC-vs-RISC debate. Is it better to have many
complex instructions that do a lot, but restrict what you can do and take
many bytes and clock cycles? Or a few simple instructions that do little but
are great building blocks, small, and execute quickly?
The 1802 is more like RISC or microcode; you tend to see certain pairs of
instructions frequently used to build more powerful instructions. If you are
trying to build "standard" programs, this is a nuisance. But if you can
re-think your problem to suit the instruction set, you come out ahead.
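One small illustration of those building-block pairs (the register numbers
are arbitrary): the 1802 has no register-to-register move, so copying a
16-bit register is spelled as two get/put pairs through the D accumulator.

        GHI  R7        ; D = high byte of R7
        PHI  R8        ; high byte of R8 = D
        GLO  R7        ; D = low byte of R7
        PLO  R8        ; low byte of R8 = D

Four one-byte, two-cycle instructions in place of a single complex one; but
the same pieces combine freely into whatever "macro" the problem at hand
needs.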
Nov 2, 2004 4:43 pm
Subject: Re: [cosmacelf] Re: I've built a working replacement for the 1861 (long)
[In today's microprocessor-based designs] there are lots of
complicated solutions, and expensive solutions, and hard-to-build
solutions, and solutions that use unobtainable parts. These aren't the
kind I'm looking for.
What attracts me to the 1802 (then and now) is that it tends toward
simple, elegant solutions. It demonstrates how much you can do with very
little. A kind of "zen" computer.
The 1861 is [another] such solution: a true one-chip part that provides a
basic video display. It is simple to build, simple to use, and simple to
understand how it works (at least, by video standards). Isn't it
interesting that today, 25-30 years later, we can't find a chip that
does the job as well?
Sure, we have lots of far more complex solutions. We can use a PLD
(Programmable Logic Device); but it will have 100s of times the gates
and use 100s of times more power, and it requires access to an expensive
programmer and knowledge of how to use it. We can use another computer,
a PC or whatever, to simulate the old 1861; but this is even more
complex and is only practical if we already have that $1000 computer for
some other purpose. We can use surplus parts that we bought for 1 cent
on the dollar; but that leads to a special-case solution that can't be
duplicated.
So, the fun for me is trying to think of another simple, elegant
solution that is as good as the 1861 was. Is such a solution possible?
