Correct me if I'm wrong here
Transistors "turn on" at a certain voltage. Taking MOSFETs into consideration, they have three on states. One(linear) acts basically like a switch, which is used for digital ICs, one is called Sub Threshold where the properties are different to the last mode, saturation, both these modes are usually used for analog shit.
The problem is these modes are far more variable than the triode mode, with a lot of considerations like swing (how far your signal can oscillate without taking the transistor out of saturation) and speed, among other things I can barely remember.
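For what it's worth, here's a rough square-law sketch of those three regions in Python (my own toy model, not anything from the thread; the parameter values are made up and it ignores channel-length modulation and other second-order effects):

import math

def drain_current(vgs, vds, vth=0.7, k=1e-3, n=1.5, vt=0.026, i0=1e-9):
    # subthreshold (weak inversion): current exponential in Vgs, low-power analog
    if vgs < vth:
        return i0 * math.exp((vgs - vth) / (n * vt))
    # triode/linear: behaves like a voltage-controlled resistor, the "switch" region
    if vds < vgs - vth:
        return k * ((vgs - vth) * vds - vds ** 2 / 2)
    # saturation: current roughly independent of Vds, the classic amplifier region
    return 0.5 * k * (vgs - vth) ** 2

print(drain_current(0.5, 1.0))   # subthreshold
print(drain_current(1.5, 0.1))   # triode
print(drain_current(1.5, 1.5))   # saturation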
In my not very informed opinion, I think photonic ICs would suit "analog" or multi-bit transistors better, considering you can just increase/decrease the intensity of light with constructive or destructive interference without worrying about much (again, I don't know much about photonic crystals). But photonic crystals are apparently not yet scalable to the level of transistors, are fairly complex, and I have no idea how you'd implement bitwise ANDs/ORs/shifts with them or make PGAs out of them.
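The interference part at least is just the standard two-beam intensity formula; a quick sketch (my example, equal beams assumed):

import math

def combined_intensity(i1, i2, dphi):
    # two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(delta_phi)
    return i1 + i2 + 2 * math.sqrt(i1 * i2) * math.cos(dphi)

print(combined_intensity(1.0, 1.0, 0))        # fully constructive -> 4.0
print(combined_intensity(1.0, 1.0, math.pi))  # fully destructive -> ~0.0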
OK so hear me out
Strawman.
Not even the simplest cell has been created, much less has an evolution from those cells into any other form been demonstrated. Fabrication.
Stopped reading your post there.
Forgot to respond to this part. There is no continuity between RE and higher-level languages. That is, no amount of AAABBB or permutations thereof rises to a level of expressivity beyond the hard boundaries of what a regular language can express. This is why you cannot just rely on some poorly defined extrapolation.
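Concretely (my example, not theirs): the classic non-regular language a^n b^n needs unbounded counting, which repeating fixed patterns can never give you, while a single counter handles it trivially:

import re

def bounded_regex_match(s, max_n=5):
    # a finite union of fixed patterns only works up to a hard-coded bound
    return any(re.fullmatch("a" * n + "b" * n, s) for n in range(max_n + 1))

def is_anbn(s):
    # one counter (strictly more than regular power) recognizes a^n b^n for any n
    half = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * half + "b" * half

print(bounded_regex_match("a" * 6 + "b" * 6))  # False once n exceeds the bound
print(is_anbn("a" * 6 + "b" * 6))              # True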
maybe digital was a necessary step because it makes the machines easier to understand, and now that we are getting close to true AI, we can use intelligence beyond our own to design analogue systems that surpass digital systems
All but one section of most processors is solid state. What I was attempting to say is that, with MMX and a few instructions from prior units, the mechanical/internal issues attributed to predictive branching have more to do with the excessive quantity of overly specialized sections in modern processors sitting above the base x86 line.
In a modern CPU there may be as many as 50-90 different blocks for adding. Some are for outcomes that are whole numbers only, some are for outcomes that are decimal and negative only, etc. The rail is set up to yield to whichever block returns first, which often causes extremely unpredictable cycle costs. You can add two randomly generated numbers, both whole and decimal, 250 times and end up with cycle costs of 10-50 almost at random.
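A toy model of that claim (purely illustrative, the per-block latencies are made up by me; it just shows how "which specialized block handles the operands" would make the cost jump around):

import random

# hypothetical latencies for the specialized adder blocks described above
BLOCK_LATENCY = {"whole": 12, "decimal": 27, "negative": 41, "mixed": 50}

def add_cost(a, b):
    # pick the block by operand shape, as the dispatch is described above
    if a < 0 or b < 0:
        kind = "negative"
    elif isinstance(a, int) and isinstance(b, int):
        kind = "whole"
    elif isinstance(a, float) and isinstance(b, float):
        kind = "decimal"
    else:
        kind = "mixed"
    return BLOCK_LATENCY[kind]

def rand_operand():
    return random.choice([random.randint(-9, 9), round(random.uniform(0, 9), 2)])

costs = [add_cost(rand_operand(), rand_operand()) for _ in range(250)]
print(sorted(set(costs)))  # the 10-50 spread the post talks about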
One of the better approaches that MIT and Harvard have recommended is a system that uses patterned escalations.
By reserving the numerical space above 99 for code, data/code execution would work as follows. Say the code for add is 0xFE. In this example an add in ASM would look like this:
FE1224 -> 36
Overflow -> 0
To multiply you would say.
FEFE1224 -> 33
Overflow -> 1
By reserving the space above 99 the code/data read is reduced to simple comparative logic. The pools are sequentially latched in a strictly additive process.
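Here's how I read that decode step, as a toy sketch (the escalation rule "repeat the opcode to go from add to multiply", the overflow convention, and all names are my own guesses, so it reproduces the add example above but not necessarily the multiply result):

RESERVED = 0x99   # values above this are treated as code rather than data (assumed)
ADD = 0xFE

def decode(stream):
    # simple comparative logic: count leading opcode bytes to pick the escalation level
    level = 0
    while level < len(stream) and stream[level] == ADD:
        level += 1
    a, b = stream[level], stream[level + 1]
    result = a + b if level == 1 else a * b   # 1x opcode = add, 2x = multiply (assumed)
    overflow = int(result > RESERVED)         # result spilled out of the data range
    return result & 0xFF, overflow

print(decode([0xFE, 0x12, 0x24]))         # -> (54, 0), i.e. 0x36 with no overflow
print(decode([0xFE, 0xFE, 0x12, 0x24]))   # multiply escalation, overflow set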
They argued that in this way the various pools could be made significantly smaller, with a much higher degree of Lego-like flexibility. An escalation-based system would provide the efficiency of an ARM with the parallel power of a GPU by adding routing instructions that are noticeably absent in modern architectures.
It didn't catch on due to a higher memory requirement at a time when 4-32k of memory was very expensive.
If you really wanna know how bad some of these companies are ripping us off...
The x86 design has a really bad nexus that requires you to mode-switch a lot. Almost 1/3 of all operations waste cycles just to repeatedly put the processor back in 64-bit mode for math, at great expense, since the operations are still computed in 8-bit chunks...
A killer way to fix this is to make all operations compliant with the ISO 64-bit float standard (IEEE 754). Instead of using a code to flip the mode to 64-bit, loading a, loading b, adding a and b, storing the result in x, and finally writing x to some address...
They could just use a compound register that holds the results of add, subtract, multiply, divide, pow, and sqrt for whatever numbers are currently stored in mA and mB, and use a code to select the result at index 1 to store at some address. The code would be reduced to: load a, load b, write result of subtract to some address. No mode flips, all math computed at a strict 2 cycles. No need for MMX, SSE, etc...
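A toy model of that compound-register idea (CompoundALU, the result ordering, and the NaN handling are all my own; it just illustrates "compute every result once, then select by code"):

import math

class CompoundALU:
    # every result is recomputed when mA/mB are loaded, then picked out by a code
    OPS = ("add", "sub", "mul", "div", "pow", "sqrt")

    def __init__(self):
        self.results = {}

    def load(self, a, b):
        a, b = float(a), float(b)
        self.results = {
            "add":  a + b,
            "sub":  a - b,
            "mul":  a * b,
            "div":  a / b if b else float("nan"),
            "pow":  a ** b,
            "sqrt": math.sqrt(a) if a >= 0 else float("nan"),
        }

    def select(self, code):
        # one "select result" code replaces separate add/sub/mul/... instructions
        return self.results[self.OPS[code]]

alu = CompoundALU()
alu.load(12, 24)
print(alu.select(1))   # sub -> -12.0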
Who’s Enstein, genius?
Please call them MLfags. AI is a far broader field than those fucking statisticians make it out to be.
The real future is Duotronics. Until that is fully implemented any improvements will be minor.