/ home / blog about

Towards Manufacturing Logic Devices at Home


open source rules the software ecosystem. we have Linux, GNU, and all the applications on top. your device is near-infinitely hackable... at the software layer.

with RISC-V, Framework, and earlier projects like Novena, we're beginning to achieve some of this same hackability at the hardware layer.

but the base layer -- the actual manufacturing of integrated circuits -- remains an impenetrable wall. players like Sam Zeloof do some (small-scale, proof-of-concept) IC manufacturing at home. and there are some defunct projects like HomeCMOS. but otherwise, the space is close to a vacuum. how do we make inroads?

TODO: add links for the above

We Don't Need Perfection

we don't need 5 GHz clock speeds and mobile-level power consumption out of the gate. even a 10 kHz, 1000 logic-element device has some value (say, as a keyboard driver). once we have a foot in the door, incremental improvements can make the technology more widely applicable. our problem of open-source computation is one of bootstrapping.

silicon devices aren't friendly to manufacture. the equipment is expensive, the chemicals are dangerous, the tolerances are small, and the physics/chemistry ranges from college-level to PhD-level.

when i was in middle school, i learned about electromagnets. my dad took me to the hardware store and we bought several iron bolts, annealed copper wire, paperclips, thumbtacks, and a lantern battery. within a day, i had a two-bit electro-mechanical adder. each bit was represented by three tacks and a paperclip. the paperclip was fixed to one tack (A) and freely rotatable such that at one rotation it would bridge an electrical connection to tack B and at a different rotation it would instead bridge a connection to tack C (logic '0' and logic '1', respectively). by introducing an electromagnet, i could conditionally pull the paperclip to one orientation or another. it was in effect a relay, and since the tacks and paperclips could conduct electricity, the relays could feed into each other.

relay computers were a thing. and they're simple enough that a motivated 12-year-old can build one. they have a few notable downsides, but the largest for us in this context is that they're very labor-intensive to manufacture/assemble.

Non-Silicon Forms of Computation

Wikipedia has an entire page on Unconventional Computing, and it's not even complete. there's also Fluidics, Optical Computing, Chemical Computing, Electrochemical Transistors, and of course, vacuum tubes!

these things aren't just the realm of theory. they all had use before the reign of the transistor:

the point of listing these is to break the misconception that computation must take place on silicon, or even on semiconductors in general. the most alluring technology for my purpose looks to be ferromagnetics.

magnetic core memory was widespread before we had integrated circuits. the Apollo Guidance Computer used core memory for its program data. lesser known, the Elliott 803 repurposed core memory into logic gates.

magnetic cores are really simple: you have some ferromagnetic material with an interesting, often nonlinear relationship between its electric and magnetic fields, and you have some conductor (copper) to carry these fields from one place to another. that's two easily procurable materials, plus an insulator (air). and it's all solid state and (at a high level) operates on classical electromagnetism -- making it scale-independent: a 1cm core behaves roughly the same as a 100um core, only with power scaled accordingly. you don't need precise fabrication, so with some effort it should be possible to manufacture an integrated circuit out of these materials using something like selective laser sintering (which could pair nicely with non-electronic home manufacturing).

Ferromagnetic Computing

it's widely known that any combinatorial circuit can be reduced to a representation which uses only NAND gates (this includes flip-flops, allowing not just combinatorial logic but also volatile storage, and hence general-purpose computation). one could alternatively use NOR gates, XOR gates, or inhibit gates (Y = A and not B).
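as a quick sanity check of that reduction argument, here's a minimal sketch building the other basic gates out of NOR alone (function names are mine, purely illustrative):

```python
# sketch: deriving NOT, OR, and AND from NOR alone, mirroring the
# universality argument above. names are illustrative.

def NOR(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def NOT(a: int) -> int:
    return NOR(a, a)            # NOR with both inputs tied together

def OR(a: int, b: int) -> int:
    return NOT(NOR(a, b))       # invert the NOR back

def AND(a: int, b: int) -> int:
    return NOR(NOT(a), NOT(b))  # De Morgan: a AND b = NOT(NOT a OR NOT b)

# exhaustive check over all input combinations
for a in (0, 1):
    for b in (0, 1):
        assert OR(a, b) == (a | b)
        assert AND(a, b) == (a & b)
```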

the Elliott 803 demonstrates a sort of inhibit gate, at an analog level: Y = A - B (based on the gate wiring, it could perform Y = +/-A +/- B +/- C with the signs configurable). the reason it required transistors is that these analog operations were lossy and needed amplification to achieve the desired digital operation. that is, the "inhibit gate" might actually look something like Y = 0.5*A - 0.5*B, and in order to feed this into the next gate you would need to amplify Y by 2, which a transistor does nicely.
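numerically, the loss-then-amplify story looks like this (a toy model; the 0.5 loss factor is the illustrative number from above, and the clipping amplifier is my own stand-in for the transistor stage):

```python
# toy model of the lossy analog inhibit gate and its restoring
# amplifier. LOSS and GAIN are illustrative numbers, not measurements.

LOSS = 0.5
GAIN = 2.0

def inhibit_lossy(a: float, b: float) -> float:
    # the "real" gate: Y = 0.5*A - 0.5*B instead of the ideal A - B
    return LOSS * a - LOSS * b

def amplify(y: float) -> float:
    # restore the signal and clip it to the logic rails [0, 1]
    return max(0.0, min(1.0, GAIN * y))

raw = inhibit_lossy(1.0, 0.0)   # 0.5 -- too weak to drive the next gate
restored = amplify(raw)         # back to a clean 1.0
```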

but that's not to say one couldn't perform this amplification using ferromagnetics. in fact, magnetic amplifiers are all around us: transformers.

can we assemble these components into one of the primitive digital logic gates?

Ferromagnetic Cores: Theory/Operation

iron (ferrite) is among the materials on earth which have interesting coercivity properties: that is, the material experiences some internal changes during the application of an external magnetic field. you can see this yourself by finding some volcanic rock or visiting a volcano: liquid magma emerges from the earth, and this magma contains iron. while liquid, the iron has low coercivity: its magnetic domains will orient along the earth's own magnetic field. when the iron cools, its coercivity increases. take the rock somewhere else on earth, and its domains won't reorient; place a compass next to it and that compass will point as if you were at the location where the rock was created! this effect can last for a substantial amount of time: it's one of the ways we know that the earth's magnetic field has changed throughout history.

we see immediately that the iron in this scenario is storing information.

meanwhile, Faraday's Law of Induction describes the symmetric relationship between a time-varying magnetic field, and the electric field (voltage along some loop). in its basic form, we can wrap a wire around some material, apply a voltage across that wire, and induce a change in the magnetic field over that material.

by pulsing a voltage in one direction or the other, we can induce a change in the material's magnetic polarization, and that change can persist after the voltage is removed. if we choose a material with the right coercivity properties, this can cleanly store one bit of information.

the above curve describes the magnetization (M) of a material as the applied field (H) varies (image: edited, original by Nanite). if the material starts unpolarized and we apply a positive field, its state will move up along the right half of this grey curve. remove the applied field and M will fall slightly, to where the blue dot is. apply a positive field again, and M won't change much. apply a negative field, and M will repeat the process in reverse, settling at the star instead. crucially, applying a small negative field won't change M much, so the data storage is resilient to some amount of noise.
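the behavior of that curve can be captured in a minimal square-loop model: magnetization only flips when the applied field crosses the coercive threshold in either direction (all numbers here are illustrative, not material constants):

```python
# minimal square-loop hysteresis model of the M-H curve described
# above. HC and M_SAT are illustrative, not real material constants.

HC = 1.0       # coercive field: the threshold for flipping
M_SAT = 1.0    # remanent magnetization magnitude

def step(m: float, h: float) -> float:
    """Return the new magnetization after applying field h."""
    if h > HC:
        return +M_SAT
    if h < -HC:
        return -M_SAT
    return m   # sub-threshold fields leave the state alone (noise immunity)

m = 0.0              # start unpolarized
m = step(m, +2.0)    # strong positive field -> settles at the blue dot
assert m == +M_SAT
m = step(m, -0.5)    # small negative field: the stored state survives
assert m == +M_SAT
m = step(m, -2.0)    # strong negative field -> settles at the star
assert m == -M_SAT
```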

TODO: need to explain somewhere that "square-loop" is what we want -- most importantly, the thresholding allows us to amplify partially polarized states to a fully polarized state, giving us well-defined logic levels.

Reading and Writing Bits

here we've got an iron toroid with two separate wires coiled around it. the device is symmetric, but consider the left loop the "drive wire" and the right loop the "sense wire".

we can write a logic '1' (move the material's state to the blue circle) with a clockwise (CW) pulse through the drive wire, and write a logic '0' (black star) with a counter-clockwise (CCW) pulse.

we used Faraday's Law to show that an external voltage can induce a changing magnetic field and cause the material to change state. but this relationship goes in both directions: a changing magnetic field also induces a voltage. consider the transition from logic '1' to logic '0': as the magnetic field changes, this induces a voltage around the wire loops, and we could detect this by attaching a voltmeter to the sense wire. on the other hand, if we applied a negative field to put the device back to '0', and it was previously in '0', there's no significant change in the device's magnetic field: the voltmeter would show a much weaker signal.

so we can write a bit by pulsing the drive wire either CW or CCW, and then read it back later by forcing the device back to '0' with a CCW pulse. this is a "destructive" read, because it destroys the state of the device, but it's still a way to store data across time.

the above oscilloscope images show these two scenarios respectively. we apply a CCW pulse to the drive wire (yellow line) and then monitor the voltage across the sense wire (purple line), loaded with a very low-value resistor.

there's always some residual output onto the sense wire -- via inductance from the drive wire if nothing else -- but the output during a 1 -> 0 state transition (top plot) shows substantially more energy than the 0 -> 0 case (bottom plot).
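the full write/destructive-read cycle can be sketched as a tiny state machine (a behavioral model only; +1/-1 stand in for CW/CCW polarization):

```python
# sketch of the write / destructive-read cycle described above.
# the core stores +1 (CW, logic '1') or -1 (CCW, logic '0'); a read
# forces it to '0' and reports whether a flip occurred (the sense pulse).

class Core:
    def __init__(self):
        self.m = -1          # start at logic '0' (CCW)

    def write(self, bit: int):
        self.m = +1 if bit else -1   # CW pulse or CCW pulse on drive wire

    def read(self) -> int:
        """Drive CCW; a strong sense pulse means the core held a '1'."""
        flipped = (self.m == +1)
        self.m = -1          # destructive: the stored state is gone
        return 1 if flipped else 0

c = Core()
c.write(1)
assert c.read() == 1   # 1 -> 0 transition: strong sense pulse
assert c.read() == 0   # already at '0': only the weak residual signal
```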

if we wanted to recover a binary signal from this, we could place a capacitor across the sense wire, connect that to a comparator (opamp), and observe a clean logic-high or logic-low voltage at the output.
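a crude sketch of that recovery stage, with the capacitor approximated as a pulse-energy integrator (waveform samples and threshold are invented for illustration):

```python
# recovering a binary signal from the sense pulse: integrate the pulse
# energy (the capacitor's job) and threshold it (the comparator's job).
# sample values and threshold are illustrative, not measured.

STRONG_PULSE = [0.0, 0.8, 1.0, 0.6, 0.1]    # 1 -> 0 transition
WEAK_PULSE   = [0.0, 0.1, 0.15, 0.05, 0.0]  # 0 -> 0 residual coupling
THRESHOLD = 1.0                             # tuned between the two cases

def comparator(samples: list[float]) -> int:
    energy = sum(samples)       # crude stand-in for the capacitor voltage
    return 1 if energy > THRESHOLD else 0

assert comparator(STRONG_PULSE) == 1   # clean logic-high
assert comparator(WEAK_PULSE) == 0     # clean logic-low
```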

Logic Gates

in the 60's, one would wire hundreds or thousands of these ferrite cores into a grid, like this:

(photo from Konstantin Lanzet)

3 wires are routed through each core:

the diagonal wire serves the role of the sense wire in our earlier example. we wrapped our wires around the core, but you can actually just route them through the core to achieve the same effect, with lower fidelity.

the 6 green and 8 red wires serve the role of row select and column select, respectively. by driving a specific row and a specific column simultaneously, exactly 1 of these 48 cores would be written. the drive currents are calibrated such that the field from any one of these signals would lie somewhere in the inactive region of our M-H curve from earlier, but the fields from both of the signals added together would be enough to move the core substantially along this curve.

in this way, these core memories from the 60's were already performing 'AND' operations.
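a toy model of this coincident-current selection (the half-select fraction is illustrative; the point is only that one drive line stays below threshold while two together exceed it):

```python
# coincident-current selection: each drive line alone stays below the
# coercive threshold, but row + column together cross it. the 0.6
# half-select fraction is illustrative.

HC = 1.0                # coercive threshold from the M-H curve
HALF_SELECT = 0.6       # field contributed by a single drive line

def core_flips(row_driven: bool, col_driven: bool) -> bool:
    h = HALF_SELECT * (row_driven + col_driven)
    return h > HC       # only the fully selected core crosses threshold

assert core_flips(True, True)        # the selected core flips: an AND
assert not core_flips(True, False)   # half-selected cores are undisturbed
assert not core_flips(False, True)
assert not core_flips(False, False)
```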

the calibration for this is tricky though. for our purposes, an 'OR' gate provides more reliable operation.

imagine the core depicted above is polarized CCW. a pulse on either A or B will push it toward the CW polarization. then a pulse on CLEAR -- which is wrapped in the opposite direction as A or B -- will push it back into the CCW polarization.

this is a simple OR gate. for signals on the wire, we treat a pulse as logic '1' and the lack of a pulse as logic '0'. calibration is simpler than for the memories above: just tune the signals to be as strong as possible. if any '1' arrives at the core, its state gets flipped to CW, and when we apply the CLEAR signal later we get a pulse on the SENSE wire (logic '1'). if no '1' arrives at the core, the CLEAR signal does nothing and no pulse is produced at the SENSE wire (logic '0').

if you're keen, you'll notice that the sense wire gets pulsed not only when the CLEAR signal transitions the core from CW to CCW, but also (in the opposite direction) when A or B transitions the core from CCW to CW: i'll address that later.

for now, consider a slight modification: what if we added another control signal, and inverted the polarization of A and B?

the operation of the device now looks like this:

  1. pulse SET to polarize the core CW.
  2. allow some time during which a pulse may arrive on A or B to polarize the core CCW.
  3. pulse RESET to polarize the core CCW, and observe the SENSE wire.
  4. (repeat for the next operation)

this time, the device defaults to logic '1', but if either A or B receives a pulse it gets flipped to the logic '0' state before it's read. in this sense, it's a NOR gate: one of the primitive gates from which we know any combinatorial logic can be built.
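the SET / input / RESET cycle steps through like this (+1 is CW, -1 is CCW; a behavioral sketch, not an electrical model):

```python
# the SET -> inputs -> RESET cycle above, as a behavioral sketch.
# +1 is CW polarization, -1 is CCW; the sense wire pulses only when
# the RESET actually flips the core.

def nor_cycle(a: int, b: int) -> int:
    m = +1                        # 1. SET: polarize the core CW ('1')
    if a or b:                    # 2. any input pulse drives it CCW
        m = -1
    sense = 1 if m == +1 else 0   # 3. RESET to CCW; pulse iff we flipped
    return sense                  # 4. ready for the next operation

assert nor_cycle(0, 0) == 1       # matches the NOR truth table
assert nor_cycle(1, 0) == 0
assert nor_cycle(0, 1) == 0
assert nor_cycle(1, 1) == 0
```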

in practice, 5 wires through a core may be unwieldy. if we remove the 'B' input above, we're left with an ordinary NOT gate. so we may prefer to construct our circuitry from 4-wire OR and 4-wire NOT gates instead.

alternatively, if we treat the blue/red wires as data, and the green wires as out-of-band control signals, then we don't need separate SET/RESET wires: we can remove the RESET wire and do this:

  1. apply positive pulse to SET to polarize the core CW.
  2. allow some time for a pulse to arrive on either A or B.
  3. apply negative pulse to SET to polarize the core CCW, and observe the SENSE wire.
  4. (repeat for the next operation)

it's not too crazy: the SET signal is sort of behaving just like a differential clock signal, switching from +V to -V to +V to -V ...

Cascaded Logic

so now we've got our primitive logic gates. we should be able to assemble them into something greater. a good thing to build first would be an inverter chain. just take our clocked inverter, build 3 of these, and wire them in series, right?

(i'm using the 3-wire inverter just discussed, where SET/CLEAR are folded into one CTL wire).

but it's not that simple: we have a few stray signals we need to look closer at first.

consider the case where all cores are polarized CW (logic '1'), below (the circular arrow in the center of each core indicates its polarization).

then we apply a positive pulse to core S1 in order to force it into the CCW state:

as S1 transitions from CW to CCW polarization, it dumps current as shown onto the wires connected to it.

after some time, this polarizes the neighboring cores.

we successfully transferred data from S1 to S2, but in the process, destroyed whatever other state was in S0!

it's actually worse than that. imagine another core after S2. when S2 changes state during this sequence, it'll dump current onto its output too, and thereby affect things downstream of it. in this naive inverter chain, a transition anywhere can force an arbitrary number of downstream or upstream cores into the '0' state.

but consider if S2 were already in the '0' state before this transition. it wouldn't undergo any transition, and hence wouldn't output any signal, and therefore nothing downstream of it would be affected by S1's transition.

this hints that we can isolate data transfers by inserting buffer cores into this chain which are fixed at the '0' state during the CTL1 transition.
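the isolation argument can be demonstrated with a deliberately crude model, where every transition simply forces neighbors toward '0' (a simplification of the real current-dumping behavior, but it shows why a core held at '0' blocks the cascade):

```python
# toy model of the isolation argument: a core dumps current onto its
# neighbors only when it actually transitions, so a neighbor already
# held at '0' absorbs the pulse without passing anything on. this is
# a deliberate simplification (every transition here just forces '0').

def force_zero(cores: list[int], i: int) -> None:
    if cores[i] == 0:
        return                   # no transition -> no output current
    cores[i] = 0
    for j in (i - 1, i + 1):     # dumped current disturbs both neighbors
        if 0 <= j < len(cores):
            force_zero(cores, j)

naive = [1, 1, 1, 1]             # S0..S3, no buffer cores
force_zero(naive, 1)
assert naive == [0, 0, 0, 0]     # the transition cascades everywhere

buffered = [1, 1, 0, 1]          # a buffer core held at '0' after S1
force_zero(buffered, 1)
assert buffered == [0, 0, 0, 1]  # the buffer stops the cascade
```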

TODO: add CTL2 0V signal; CTL0 and CTL3 should be +Vdd instead of negative; make the buffer cores be non-inverting

for brevity i replaced the visual polarizations with their logic values and whichever way they're transitioning. note that current still flows into the buffers, it just doesn't do anything. crucially, no current flows out the other end of the buffers.

we keep the two buffer cores (CTL0 and CTL3) at '0' by driving them with a negative voltage. not strictly necessary, but the real circuit experiences things like reflections which would otherwise nudge the buffers away from their set point.

finally, we can tile this group of four cores to construct inverter chains of arbitrary length:

note that i've annotated only two of the cores as having a state: each of these two outlined "inverters" carries only one bit, with the rest of the cores being used as buffers.

you might think that when we cascade these devices, we could remove the input buffer of the second device because it's already guarded by the output buffer of the first device. unfortunately, that's not the case: i showed that buffer cores are needed when we discharge the inverter, but after we discharge it we need to charge it back to logic '1', and that requires an additional buffer.

but this does present a problem: each stage of this chain performs 4 inversions -- so externally it no longer behaves as an inverter, but just a shift register. we can solve this by inverting the wiring on 3 of these cores, and leaving just one inverting core:

notice the wire sections which were previously blue but are now red: any pulses they carry are sent "into" the core (into the page) instead of "out of" the core as before. hence, pulses on the red wires have a tendency to write logic '1' (CW) to the core they feed into, whereas the blue wires write logic '0' (CCW) instead.

as data arrives into this device, it's immediately inverted, and will later be propagated downstream. if we're deliberate with our control signals, we can cascade these inverter devices without issue. here's what that looks like over time:

TODO: CTL0 and CTL3 have wrong transition: should just copy the cycle4 diagram and change the numbers
TODO: the CTL0/CTL3 0->1 transitions are wrong.

S2 arrives as input to the first device, which can receive data because CTL0 is at 0V (or even just left floating). this write polarizes the core away from logic '1', hence the core is driven toward the inverse of S2, i.e. S̄2.

simultaneously, the fourth core was previously storing S̄1. it's actively driven back to 0, dumping S̄1 onto its output. the second device behaves identically to the first, inverting its S̄1 input back to S1 and simultaneously outputting S0.

over the next three CTL transitions, data is moved internally through the device. these CTL transitions cause current to flow in the device's input/output loops, but by watching the interface between these devices we can see that this only happens when the adjacent device isn't open to writes (that's the point of our buffers).

that last CTL transition is just a repeat of the first image. if we were to add an additional input winding (blue wire) to the first core in a device, then that device would now serve as a NOR gate. so we've got a framework for cascading, synchronous logic: use this four-core device as our primitive logic gate and arrange it into whatever circuit we want. the control signals are nastier than with CMOS, but the concept's there.
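at the logical level (ignoring the control-signal choreography), a chain of these four-core devices behaves like this sketch, with three pass-through buffer cores and one inverting core per device:

```python
# logical behavior of a chain of four-core devices: three buffer cores
# pass data along (their job is isolation, handled by the control
# signals and ignored here), and one core inverts. so each device is a
# delay stage plus exactly one inversion.

def buffer(bit: int) -> int:
    return bit        # buffer core: passes the bit, isolates transfers

def invert(bit: int) -> int:
    return 1 - bit    # the single inverting core

def device(bit: int) -> int:
    # one four-core device: input buffer, inverter, two more buffers
    return buffer(buffer(invert(buffer(bit))))

def chain(bit: int, n_devices: int) -> int:
    for _ in range(n_devices):
        bit = device(bit)
    return bit

assert chain(1, 1) == 0    # one device acts as an inverter
assert chain(0, 2) == 0    # two devices: double inversion
assert chain(1, 4) == 1
```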

TODO: discuss simulation, show results

TODO: discuss gain stage