Creating a Dense Information Glyph

Stenography has been with us for a long time.
It just needs a second pass to translate the shorthand to longhand.
So ship a small explanation with the shorthand and you're good to go.

Not everything must be overcomplicated.
It's like GPG with multiple recipients: instead of encrypting the data separately for everyone on the recipient list, it gets encrypted once to a generic dummy key, and the password to that is then encrypted for every recipient.
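For anyone who hasn't seen that pattern, here's a minimal sketch of the session-key idea in Python. Fernet (symmetric) stands in for the recipients' public keys just to keep it short; real GPG wraps the session key asymmetrically.

# GPG-style hybrid pattern: encrypt the bulk data once with a
# throwaway session key, then wrap that key for each recipient.
from cryptography.fernet import Fernet

message = b"the actual longhand explanation"

session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(message)

# Stand-in for each recipient's public key (assumption for brevity).
recipient_keys = {name: Fernet.generate_key() for name in ("alice", "bob")}
wrapped = {name: Fernet(k).encrypt(session_key) for name, k in recipient_keys.items()}

# A recipient unwraps the session key, then reads the shared ciphertext.
recovered = Fernet(recipient_keys["alice"]).decrypt(wrapped["alice"])
assert Fernet(recovered).decrypt(ciphertext) == message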

Your picture is about 14 kilobytes and wastes a lot of space.

65535

There, much more dense.

Of course not. You'll always need one of those, especially for "aliens" etc.
Intuition is partly learned, after all.

That's why I suggested sticking to shorthand/steno. That way your legend is also a clue to the system, because it duplicates things.

For example.

Let's say your glyph is this ()
and then you write your legend
() |||||

Then the next glyph []
[] () ()

Then you write
[] [] ||||| ||||| ||||| |||||

Etc. Things like that; intelligent beings would figure it out.

|
||
|||
||||
()
()|
()||
()|||
()||||
[]
[]|
[]||
[]|||
[]||||
[]()
et cetera
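Since the values are only implied by the listing, here's a rough sketch of that tally scheme in Python, assuming [] = 10, () = 5 and | = 1.

# Additive tally glyphs, Roman-numeral style: biggest symbols first.
def to_glyph(n: int) -> str:
    tens, rest = divmod(n, 10)
    fives, ones = divmod(rest, 5)
    return "[]" * tens + "()" * fives + "|" * ones

print(to_glyph(6), to_glyph(14), to_glyph(15))  # ()| []|||| []()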

Yeah, but we made computers display numbers the way we read text because it's convenient (and it's how we're taught to read numbers).
A consequence is that we have to use a positional number system like we do now, or a tally-like system, which is how I'd classify Roman numerals.
Heck, you could probably invent a positional number system where the digits are primes, but that sounds a great deal more recursive to implement, and would basically collapse into what he made in the first place.
It provides interesting number-theoretic insights at a glance, unlike positional systems, which neuter everything that isn't a power of the base.
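To make the prime angle concrete, here's a rough sketch (my own, not from the post above) of the prime-exponent view of a number:

# Represent n by its prime factorization instead of positional digits.
def prime_factor_glyph(n: int) -> list[tuple[int, int]]:
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

# 360 -> [(2, 3), (3, 2), (5, 1)]: divisibility is visible at a glance,
# whereas "360" in base ten only makes powers of ten obvious.
print(prime_factor_glyph(360))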

Now, on implementing this in a really gay bit-level way, since primes combine with each other in an established way (multiplication), at the byte level (yes, still using 8 bits because fuck you), you could consider:
0000 0000 = STOP (or 1)
0000 0001 = 2
0000 0010 = 3
0000 0011 = 5
...
0111 1111 = prime number 127
1xxx xxxx = (reserve next x bytes; works additively if multiple in sequence)
Which is more or less how UTF-8 works.
Yeah, it's inefficient because I'm not considering how to represent the powers; not to mention that multiplication is commutative, but streams of bytes have order.
To borrow another term from the Unicode lexicon, a renderer can consider everything before a STOP as a composite character.
I spent far too much time thinking about this, and I have a lot of preemptive optimization thoughts, which is an indication that I should probably stop.
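For what it's worth, the 7-bit core of that scheme is only a few lines. This sketch leaves out the 1xxx xxxx extension bytes and exponents, and leans on sympy for prime lookups; it's an illustration of the idea, not the original byte layout verbatim.

# Each byte is a prime index (1 -> 2, 2 -> 3, 3 -> 5, ...), 0x00 is STOP.
from sympy import prime, primepi  # k-th prime / index of a prime

def encode(n: int) -> bytes:
    out, p = [], 2
    while n > 1:
        while n % p == 0:
            n //= p
            out.append(int(primepi(p)))  # fits in 7 bits up to the 127th prime
        p += 1
    out.append(0)  # STOP ends the composite character
    return bytes(out)

def decode(stream: bytes) -> int:
    n = 1
    for b in stream:
        if b == 0:
            break
        n *= int(prime(b))
    return n

assert encode(12) == bytes([1, 1, 2, 0])  # 12 = 2 * 2 * 3
assert decode(encode(12)) == 12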

Is this fucking Gallifreyan?

How can you have enough interest in tech to end up posting on this board but not understand basic information theory? If you condensed the entirety of a book into a single glyph on a medallion you'd need a microscope to read it. Sounds like you just really like the movie and want to larp it. Admittedly, it was a pretty good movie, but don't just cringelarp, learn some actual linguistics.

The alphabet is already a system of creating glyphs from a small set of unit symbols. Much easier than recognizing strokes in moonrunes, and unlike moonrunes you always know how a word is pronounced so you can verbally ask someone.

Maybe if you massively restricted the domain you could come up with a glyph system that's not utter shit. Emojis are like that. Granted emojis are still shit to an intelligent person, but for retards who use IM they are great because 90% of their "communication" is repeatedly stating the same handful of simplistic emotions anyway. So it makes sense to replace these common expressions with single symbols. But guess what, emojis are impractical as fuck without a dedicated input method, and even phone emoji keyboards are a mess to use when there's more than a few dozen emoji to choose from. Anyway, I bring up emojis because they're a great example of practical issues you'll have to solve.

I think moon pcs have an input method where you type the pronunciation and pick from autocompleted runes. If you pick standardized names for each emoji like :sad: (welcome to forums in 1999 lol) you could have the same approach. Or you could have a hierarchical menu system where you go statement->abstract->emotion->negative->sad->basic. But that's already as many keystrokes as just typing "sad". At least seeing 😞 is a split second faster than reading. As for designing the actual symbols the best rule of thumb is probably the more abstract they are the harder they are to guess, so rare glyphs should look like the thing they describe while common ones can get away with being squiggles. See: Everybody knows what :) is but wtf is
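As a concrete version of the :sad: idea, here's a tiny shortcode replacer; the table is hypothetical, just to show the shape of it.

# 1999-forum style: text stays typeable, the renderer swaps known shortcodes.
import re

SHORTCODES = {"sad": "😞", "smile": "🙂"}  # made-up minimal table

def render(text: str) -> str:
    return re.sub(r":(\w+):", lambda m: SHORTCODES.get(m.group(1), m.group(0)), text)

print(render("that's :sad: tbh"))  # -> that's 😞 tbh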

Also with schematics, you see the same principle of frequency vs. abstractness. Consider electronics: It's not quite obvious what ---/\/\/\/--- is, especially now that we live in an age of semiconductor resistors rather than literally long zigzag wires, but it doesn't matter because who doesn't remember it from seeing it all the time? Meanwhile something like transistor type is a lot less obvious if you haven't seen it before, but transistors are pretty common so people end up memorizing it easily. Now look at inductor with magnetic core: en.wikipedia.org/wiki/File:IEC_Inductor_with_magnetic_core.svg

With basic inductors, at least it looks like a spiral so you can tell it's something to do with a coil. Maybe you're smart enough to realize coils add a time delay so that's the purpose of the component. But with the mag.core inductor, good luck figuring out what it is from just the symbol. And since it's a relatively rare component, many people will need to look it up so this is a failure at effective symbology in my opinion. Admittedly real electronics engineers would have learned all of this in training, but since you can never learn all of a language, much less a secondary conlang, your case is similar to the clueless kid who only took circuits 101 and that's about it. That guy is better off just making a box on the circuit diagram and writing "inductor w/ mag core" than using this symbol.


Why would you do it for numbers? There's already a great system to represent numbers, the Hindu-Arabic numerals we've been using for centuries. It is far superior to your circles. Numbers in general have lots of highly optimized representations from science and engineering. You don't have much value to squeeze out with glyphs. You should instead start designing glyphs for abstract concepts, such as emotional judgements, political/ideological concepts, software stacks and architecture. These are complex topics that are hard to get across in normal language but lack a good pictorial/symbolic system. Stay away from anything math, philosophy, music, electronics or chemistry; they already have very specialized systems that work much better than anything you're likely to come up with.

Maybe phonetics could also be a decent domain. In IPA and systems like Pinyin, many letters are assigned at random so you have to memorize the alphabet first. You can come up with an alternative system of representing phonology where you can guess from each sound's symbol roughly how it's supposed to be produced.

There are a lot more distinct patterns in a room than there can be in a black-and-white glyph on paper.
In what way? How has the IT boom increased the amount of information that can be packed into a symbol on paper for practical human reading?
No, but if you let an optimization algorithm run for millennia in many parallel instances and find that they all end up with fairly similar results, there is a decent chance you've hit an equilibrium of some sort. Chinese symbols are completely conceptually different from the Latin alphabet, yet books written in either system tend to take up a similar amount of space. In fact, the mere observation that nobody is trying to squeeze more information in by shrinking font sizes to borderline unreadability should tell you where priorities lie in the practical aspects of getting data from a page into your brain.

By increasing density you increase error probability and decrease certainty. You need to focus more closely and spend more time reexamining potential errors. I think OP raises an interesting question, no doubt, and perhaps there is some room for improvement, since before the modern era part of the limitation was mechanical reproducibility, but I wouldn't expect anything too radical. The key point of interest is finding some rigorous approach to determining the limiting factors of information density in visual representations like these.

🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳

🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳

👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀
👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀


🤷‍♂️

I don't think that's the case anymore tbh 😜😜😜😍

✨✨✨😆😆😆🐱‍🚀🐱‍👓🐱‍🐉🐱‍💻🐱‍🏍✨✨✨

You're not gonna gain much by using phonetics. Phonetic writing systems exist (my own native language uses one, and the Latin alphabet was one originally). You will end up with much the same density unless you introduce way more sounds than people can remember or distinguish (another factor that evolution has already optimized for). At that point it just becomes an exercise in rote memorization of symbols, or parts of symbols, like the electrical schematic shit.

I don't think it really matters what kind of information encoding you use; the limit is more about the practical limitations of human vision and visual input processing: make it smaller and reading takes longer or gets harder. The more distinct the patterns are, the smaller they can be before they fade into uncertainty, but the fewer patterns there are in total, the more distinct they can be.