Creating a Dense Information Glyph

INTRO
Different systems of information transfer have existed for millennia. Example pics related. In the movie Arrival, we learn of an alien "alphabet," a series of dense glyphs, each containing an entire sentence. Sometimes they are combined, but let's focus on singular ones for simplicity. I have searched but cannot find any evidence of such a concept ever being used in history. What I mean exactly is a sort of symbol, glyph, or intricate and complex icon which contains a large amount of information. For simplicity, I'll call these Dense Information Glyphs, or DIGs. A DIG would be human-readable, unlike a QR code. When presented with a QR code, a human can readily recognize what it is, but not the data. My goal is optimal information transfer in a single point of human-readable data. Skimming text may be faster, but a paragraph can surely be condensed. Imagine the entirety of a book's knowledge condensed to a single glyph. Intricate, yes; impossible, no.
CURRENT DRAWBACKS
Alphabets, Roman-based or otherwise, are insufficient. Stringing words into sentences and sentences into paragraphs is a great way to represent speech on paper, but not the best method of transferring information. You can easily take in a paragraph's worth of description in 3 seconds of firsthand observation (i.e. the car drives by and you hear the sound and see the setting and feel the pebble in your shoe, etc.). Writing is wasteful in time and can be ambiguous. Furthermore there are language barriers, and anyone who speaks more than one language understands untranslatables and loss in translation.
These are word alphabets, to be concise: each symbol represents a concept or thing, and is pronounced as a word. Asian languages like Chinese have these. The problems are degradation (the symbols drift over time until they no longer LOOK like what they represent) and wastefulness in time. In the end, you are still left with paragraphs to explain something.
The Egyptians understood that pictures, drawn well (better than the Chinese, at least), could be easily read. This served to preserve knowledge, as the errors of mistranslation are smaller. Someone still has to interpret the glyphs, though, and could do so wrongly. However, like Chinese, hieroglyphics are not condensed either. There are letter-like glyphs, sounds, compounds (combining more than one glyph to create a new meaning), etc.
DIGS
Condensing information to a single glyph promotes transferability and portability. A book's knowledge can be put onto a medallion, thus preserving it. By neglecting the nuances of language and useless words, we increase efficiency and reduce language barriers. Perhaps over time humans would become proficient at decoding (they don't have to speak words in their heads as they read), taking less time to understand the meaning. Because the glyph is singular, it would become an image in the person's mind, which is more memorable. The answers would all be there, waiting to be decoded. Unlike a cipher, the goal is not to obscure or obfuscate the information, but to make it as palatable as possible. An added benefit is preventing AI censorship, since no computer would be able to decode the symbols, at least initially. Think of a captcha, but on steroids.
Designing a glyph so that simple information is quickly extracted and recognized makes it a system of embedded learning. Furthermore, if you tell someone that the answer is right in front of them, hidden in a little drawing, they will become enticed. It is a great tool to encourage learning. A problem in society now is not that information is unavailable, but that it is not sought out, or that it takes too long to learn. Rather than being handed a book they won't read, someone receives an image. In that image they immediately recognize the information most fundamental to the subject. Over time, after further analysis, the specifics, details, and expert knowledge are found in the intricacies of the design. The fundamentals might be the shape of a feather, while precise measurements are found between the fibers.

tl;dr: how do we make a super-dense version of the Arrival language, as singular glyphs that people can understand?

What methods would be best? Is there a way to do this freely, without setting heavy standards or huge keys and legends? Or do the keys and legends make up a big part of the glyph? Perhaps styles of DIGs could be created, where each one can be decoded with the "group" key? The objective here is information transfer, NOT obfuscation. Looking for Zig Forums opinions on this.

Attached: image.png (2400x1800 46.05 KB, 195.9K)

Other urls found in this thread:

forgottenlanguages-full.forgottenlanguages.org/search?updated-max=2018-12-19T09:47:00+01:00&max-results=1
en.wikipedia.org/wiki/File:IEC_Inductor_with_magnetic_core.svg
masonicdictionary.com/trestleboard.html
gregapodaca.com/numerography/files/019.html
twitter.com/SFWRedditVideos

I'm not saying they are obsolete, but that they do not fit the purpose of a dense glyph.
DESIGN
Standards would determine how a design is interpreted. I'm not excluding letters as symbols, or even whole words, but you can only use so many before they get in the way and defeat the purpose. A set of standards could be made for the creation and reading of a glyph. Lines made a certain way mean a certain thing. Or, every single point and space matters. If you choose NOT to use precise numbers or math, you must put a null symbol in that place, or tie those numbers to something in your subject.
A key or legend would aid the reader. It could easily be "packaged" with the glyph, simply next to it. It would serve to clear any doubts a reader might have about what a designer did differently. Over time, one could imagine, famous glyphs may be cropped, and the key may be lost. To prevent this, it must be incorporated into the design, integrated in a way that prevents it from being excluded.

Humans can decipher QR codes. You just have to learn how first, and it's really tedious.
The pictographic part of Chinese writing is very, very, small.

You might want to check Blissymbols for an attempt to be as pictographic as possible. For a precise conlang (constructed language), check out Lojban. There's also a conlang called Ithkuil that can represent the equivalent of a short English sentence in a single glyph.

those are all really cool, thank you. I had never heard of Ithkuil, but the examples I saw were pretty impressive. Looks outright alien, too.

The idea is pretty nice, but it's practically limited by the bandwidth of the human brain. There is only so much it can take in and process at the same time. Considering that information density is one of the parameters the evolution of existing visual encoding techniques (alphabets, pictograms, logograms) has already been optimizing for, it's doubtful that you can get something much better.

Consider the fact that density is at odds with generality and interchangeability. The more you encode in a symbol, the more symbols you need overall. The human brain is really good at quickly recognizing distinct categories of images, while deciphering unique, dense imagery will probably be a tedious process that takes conscious effort every time, because the symbol is always new.

you're describing what occultists have done for centuries

Attached: bwmastercarpet.jpg (1465x2300, 556.5K)

[citation needed]; in other words, what _exactly_ does this mean?
Special Forces and three-letter-agency types can train themselves to intricately describe the contents of an entire room at a glance. Before you say "but not everyone's like that": no one can read until they're taught, either. You need educational training for any task.
Really? So evolution has stopped? The current instance of mankind is the best possible? Furthermore, if this is true, why are there so many languages? Yet one skilled in linguistics (meta-language) can understand a never-before-heard language in a relatively short amount of time.
This seems practically true, but the recent IT boom seems to counteract that truth. When implemented correctly, density can become interchangeable. OP has already spoken of keys or legends. These could very well aid in intercompatibility and mobility.
You're thinking too one-dimensionally. There wouldn't be a different symbol for every piece of information; rather, there would be a standard for compactly displaying information in a human-friendly way.
Again, see my response to the previous point. And look into how people are able to train their minds.


With the proper education and development, I could see such a system really working out. OP, do you have any other information or ideas?

I can't even focus when I read a book; I end up skimming over entire paragraphs. Chinese was unironically too high-IQ of a language for me to learn. How in the fuck am I supposed to read your magic DIGs? Is this supposed to be some kind of Dark Ages brainlet filter that keeps people from being able to converse?
Unironically and realistically though, I feel that what you're trying to achieve isn't possible within human limitations. You propose a denser medium for communication, something that is readable in a human way while not necessarily being simple for a computer to figure out. Currently, information is already being lost every time we speak, because interpretation of a language always differs between individuals. By increasing the density of an information glyph, you're increasing the room for interpretation, and more information will inevitably be lost.
Unless I am misunderstanding you, and you are looking to achieve some form of universal, unmistakable language, in which case, isn't that what conlangs are for?

A spiraling writing system is used for Gallifreyan in the Doctor Who series. It's worth noting that the Ithkuil language allows extreme information condensation yet uses an alphabet as its writing system.

>>>/h8s/

As it seems nothing has been created in the exact way that I am mentioning, it would only be logical to create such a system myself.

Yes, and the concept is similar to what I mean; however, their goals remain reserved for "those who have the eyes to see," and are therefore purposely obfuscated.
I think it is important to note that in such a system, there wouldn't be an ambiguous interpretation free-for-all. Rather, it would be precise and obvious. The only cost should be time spent analyzing, yet all humans would come to the same precise results. Over time, yes, those who practice would get quite proficient at understanding them quickly. They would immediately see the abstract and introduction, the main idea, and then spend further examination (if they wanted to) getting more information and fine details. Information would not be lost to interpretation, as keys can be included with all the nuances and explanations.
Yes, I'm thinking schematics are a good place to start. They consist of small symbols, which can be understood more easily with a key if needed. They form a larger picture, which some can understand immediately. Math works in a similar way. Imagine the effort needed to "do" math without numbers or mathematical symbols. Yes, the numerals are representations of abstract concepts, but math transcends language once a human learns how to use it. Abstract thought that would otherwise seem extraordinarily hard becomes concrete and possible. Even those doing math in their heads will see the symbols.
Schematics have symbols which mean certain things and nothing else. In my system, any symbol or quantum of information would need precise measurements to be drawn. Using grid paper would facilitate this. If "electricity" is represented with a certain shape, then it must be drawn to precise relative dimensions. A key would elucidate any change in scale. This way, there can be no watering down of the symbols as people get lazy and use shorthand. There would be no "Chinese effect." I suppose regular grid-lined paper would suffice originally, though a different type might be better suited for it. Still, if the information takes up an entire sheet of paper, so be it: it would contain many sheets' worth of data. I will begin experimenting with grids.
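To make "precise relative dimensions" something you can actually check rather than eyeball, a tool could normalize a symbol's grid points and compare proportions. A minimal sketch of that idea in Python (the point-list representation, the example shape, and the tolerance are my own assumptions, not part of any standard yet):

def normalize(points: list) -> list:
    """Shift a symbol's grid points to the origin and scale its bounding box to unit size,
    so that only relative proportions remain."""
    xs, ys = zip(*points)
    min_x, min_y = min(xs), min(ys)
    scale = max(max(xs) - min_x, max(ys) - min_y) or 1.0
    return [((x - min_x) / scale, (y - min_y) / scale) for x, y in points]

def matches(drawn: list, reference: list, tol: float = 0.05) -> bool:
    """True if a drawn symbol has the same relative dimensions as its reference
    (point lists must be in the same order)."""
    return all(abs(ax - bx) <= tol and abs(ay - by) <= tol
               for (ax, ay), (bx, by) in zip(normalize(drawn), normalize(reference)))

reference = [(0, 0), (2, 0), (2, 4), (0, 4)]                  # say, the canonical "electricity" shape
print(matches([(1, 1), (5, 1), (5, 9), (1, 9)], reference))   # True: same proportions, drawn at twice the scale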
Receivers of a glyph would have the same feeling of "having it all" that one gets on an imageboard when they open a chan-created, dense infographic. It has everything you need to understand that person's idea or whatever they're saying or have compiled. Regardless of complexity, it is often nicer to digest than a wall of text in a post, which would be skimmed. I know for me, at least, there's a certain appeal to knowing everything is right in front of me.
I will post my prototypes in this thread. If someone wants to try the idea by providing the text version, a key, and a glyph, however simple, we would instantly know what doesn't work. If the information isn't perfectly extracted, which we would be able to test against their reference text, then adjustments would be made in the ambiguous areas. I have the time to actually develop the system in theory, and I could write a short book on it. It would provide the groundwork for anyone to use the idea and create their own. The purpose would be for others, following the "rules" that develop (such as always having precise measurements and keys), to create their own improvements. The "language" would evolve over time, though the glyphs would always be readable; there would only be more efficient methods of designing them, and more and more complex designs (which intelligent people would benefit from). As a teacher, I have to stress that the student learns to whatever level they are presented with or forced into. If you give them elementary material, they will only achieve around that level. They may be smart, but they will not reach the same heights they would when challenged.

Computers should be able to aid in writing and editing two-dimensional human languages and programming languages. Even if some are hard to do by hand, it may be easier with a computer.

Separating Information from Tone
Another obstacle that must be overcome is the separation of information from tonality and other parts of human expression. If we strictly look for data, we will miss out on what a speaker or author truly intends to say. After all, if we convert a text to a DIG, it must carry all intentions and data beyond the bare words and letters. When converting a sentence like "Johnny, that moron, killed Sam":
We must not ignore the author's opinion. That is how things become lost in translation. If we convert this to a DIG and only state that someone named Johnny killed Sam, then we miss out on the speaker's additional information. It may be biased, it may be wrong, but the speaker's opinion cannot be discarded. Rather, it should be included alongside the facts: the information that Johnny was a moron, and that this was the speaker's judgement, could be contained within the greater glyph, in a way that makes sure the reader knows it is author opinion and possibly false.

The more I think of schematics and grids, the more I think computers would be able to aid in the creation and analysis of such a system. It may not mitigate AI censorship, but the goal is then not obfuscation but transfer of information.

Look into Shannon's theory of information; that might give you some more background on the topic.
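For a concrete feel: Shannon entropy tells you the average number of bits each character of a text actually carries. A minimal Python sketch (character frequencies only; real English has longer-range structure that lowers the true figure further):

import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Estimate Shannon entropy in bits per character from single-character frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog"
# Ordinary English prose lands around 4 bits/char by this measure; with longer-range
# structure taken into account, the true figure is closer to 1 bit/char.
print(f"{char_entropy(sample):.2f} bits per character")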

I dabbled in the topic myself, though in a different way.

I've made a toy program that encodes a number into a nice-looking glyph. Pics related.

Semi-related:
These guys seem really into languages and encoding. A bit too much effort to be just a farce:
forgottenlanguages-full.forgottenlanguages.org/search?updated-max=2018-12-19T09:47:00+01:00&max-results=1

Attached: 212.png (718x684 10.78 KB, 7.54K)

Explain user. How are six big circles plus one small circle equal to 211?

I decreased the resolution and upped the contrast cause it's easier on an image-board.

I'll explain in a day or so. There's no fun in just telling right away. In the meantime, I'll post any number you ask.

Attached: 212.png (539x513 25.44 KB, 16.09K)

while this looks cool, it's a highly complex way to represent a three digit number, don't you think? I was thinking such a design would be more telling.

I think you're thinking about it wrong. It's probably something to do with factorization.
210 = 2 * 3 * 5 * 7
211 is prime
212 = 2^2 * 53

Yep, I see it now. 2 is at the top of the outer circle, hence the 2^2 being a circle with another circle at the top of it. The first 8 primes are probably equidistantly spaced on the outer circle.
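For anyone who wants to play along before the reveal, here's a minimal Python sketch of the factorization reading described above. This is not the anon's actual program; the ring/slot layout (outer ring holds the first 8 primes, slot 0 at the top) is an assumption based on the posts:

def factorize(n: int) -> dict:
    """Prime factorization by trial division: {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def primes_up_to(limit: int) -> list:
    """All primes <= limit via a simple sieve."""
    sieve = [True] * (limit + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def glyph_layout(n: int, per_ring: int = 8) -> dict:
    """Map each prime factor of n to (ring, slot, exponent).
    Ring 0 is the outer circle holding the first 8 primes, ring 1 the next 8, and so on;
    slot is the equidistant position on that ring (slot 0 assumed to be at the top)."""
    factors = factorize(n)
    index = {p: i for i, p in enumerate(primes_up_to(max(factors)))}
    return {p: (index[p] // per_ring, index[p] % per_ring, e) for p, e in factors.items()}

print(glyph_layout(212))  # {2: (0, 0, 2), 53: (1, 7, 1)}: 2 squared on the outer ring, 53 one ring in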

That's an arbitrary-digit positive integer, son.

correct.

Attached: 1000000.jpg (531x507, 41.88K)

Threw me off initially, but now I see the 5 circle is rotated 90 degrees.
Assuming a smaller circle corresponds to the next 8 primes, then the next 8 ad infinitum.

-1
0
1
2
3
4
8
16
32
64
256
512
1024
65535

Yep. That's basically it.

Attached: 58.png (715x693 7.37 KB, 8.25K)

Negative numbers don't work.
And apparently 3 images per post is the limit. And people already figured it out. I'll post these and the last three and call it a night.

Attached: 2.png (703x538 5.45 KB, 2.98K)

Attached: 65535.png (716x691 9.41 KB, 6.62K)

Also 65536 because it's pretty. You still need potentially infinite resolution if you want to represent an arbitrarily large amount of information. I guess pretty things are not necessarily the most practical. But they're still awesome.

Attached: 65536.png (538x470, 13.47K)

now this is some spicy autism
keep up the good work user

Have you read Story of Your Life? How does it compare to the movie? Can share pdf if anyone wants (~50 pages).

It does, sort of. In the story they point out that mathematical notation behaves this way. What's missing is a general notation for all human writing, for all human thought.

stenography has been with us for a long time.
It just needs a second pass to translate the shorthand to longhand.
So ship a small explanation with the shorthand and you're good to go.

Not everything must be overcomplicated.
It's like GPG with multiple recipients: instead of encrypting the message separately for everyone on the recipient list, it gets encrypted once to a generic dummy (a session key), and the password to that is then encrypted for every recipient.
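For the curious, the GPG trick in miniature might look like this. A sketch using the `cryptography` package's Fernet both for the payload and for wrapping the session key per recipient; real OpenPGP wraps the session key with each recipient's public key instead:

from cryptography.fernet import Fernet  # pip install cryptography

message = b"the actual document, encrypted exactly once"

# One random session key encrypts the payload a single time.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(message)

# Each recipient only gets the session key, wrapped under their own key.
recipient_keys = {name: Fernet.generate_key() for name in ("alice", "bob", "carol")}
wrapped_keys = {name: Fernet(key).encrypt(session_key) for name, key in recipient_keys.items()}

# Any single recipient unwraps the session key, then reads the one shared ciphertext.
recovered = Fernet(recipient_keys["bob"]).decrypt(wrapped_keys["bob"])
assert Fernet(recovered).decrypt(ciphertext) == message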

Your picture is about 14 kilobytes and wastes a lot of space.

65535

there, much more dense.

Of course not. You'll always need one of those, especially for "aliens" etc.
Intuition is partly learned, after all.

That's why I suggested sticking to shorthand/steno. That way your legend is also a clue to the system, because it duplicates things.

For example.

Let's say your glyph is this ()
and then you write your legend
() |||||

Then the next glyph []
[] () ()

Then you write
[] [] ||||| ||||| ||||| |||||

Etc, things like that, intelligent beings would figure things out.

|
||
|||
||||
()
()|
()||
()|||
()||||
[]
[]|
[]||
[]|||
[]||||
[]()
et cetera
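Rendering that little self-teaching notation in code is a few lines. A sketch, assuming (as the legend above implies) that () stands for five tallies and [] for ten:

def tally(n: int) -> str:
    """Render a non-negative integer in the notation above: [] = ten, () = five, | = one."""
    tens, rest = divmod(n, 10)
    fives, ones = divmod(rest, 5)
    return "[]" * tens + "()" * fives + "|" * ones

for i in (4, 5, 9, 10, 15):
    print(i, tally(i))  # ||||, (), ()||||, [], []()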

Yeah, but we made computers display numbers the way we read text because it's convenient (and it's how we're taught to read numbers).
A consequence is that we have to use a positional number system like we do now, or a tally-like system, which is how I'd classify Roman numerals.
Heck, you could probably invent a positional number system where the digits are primes, but that sounds a great deal more recursive to implement, and would basically collapse into what he made in the first place.
It provides interesting number-theoretical insights at a glance, unlike positional systems, which neuter everything that isn't a power of the base.

Now, on implementing this in a really gay bit-level way, since primes combine with each other in an established way (multiplication), at the byte level (yes, still using 8 bits because fuck you), you could consider:
0000 0000 = STOP (or 1)
0000 0001 = 2
0000 0010 = 3
0000 0011 = 5
...
0111 1111 = prime number 127
1xxx xxxx = (reserve next x bytes; works additively if multiple in sequence)
Which is more or less how UTF-8 works.
Yeah, it's inefficient because I'm not considering how to represent the powers; not to mention that multiplication is commutative, but streams of bytes have order.
To borrow another term from the Unicode lexicon, a renderer can consider everything before a STOP as a composite character.
I spent far too much time thinking about this, and I have a lot of preemptive optimization thoughts, which is an indication that I should probably stop.
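A rough sketch of the byte scheme in Python, leaving out the 1xxx xxxx continuation bytes and the unresolved question of exponents, just to show its shape (byte value k stands for the k-th prime, 0 is STOP):

def first_primes(count: int) -> list:
    """First `count` primes by trial division."""
    found, candidate = [], 2
    while len(found) < count:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

PRIMES = first_primes(127)                       # byte value k (1..127) stands for the k-th prime
INDEX = {p: i + 1 for i, p in enumerate(PRIMES)}

def encode(n: int) -> bytes:
    """Emit one byte per prime factor (repeated for powers), terminated by a 0 STOP byte."""
    out = []
    for p in PRIMES:
        while n % p == 0:
            out.append(INDEX[p])
            n //= p
    if n != 1:
        raise ValueError("factor outside the first 127 primes; the 1xxx xxxx extension would be needed")
    return bytes(out + [0])

def decode(stream: bytes) -> int:
    """Multiply the indexed primes back together, stopping at the STOP byte."""
    n = 1
    for b in stream:
        if b == 0:
            break
        n *= PRIMES[b - 1]
    return n

print(encode(212).hex())            # 01011000: two 2s, one 53 (the 16th prime), then STOP
assert decode(encode(212)) == 212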

Is this fucking Gallifreyan?

How can you have enough interest in tech to end up posting on this board but not understand basic information theory? If you condensed the entirety of a book into a single glyph on a medallion you'd need a microscope to read it. Sounds like you just really like the movie and want to larp it. Admittedly, it was a pretty good movie, but don't just cringelarp, learn some actual linguistics.

The alphabet is already a system of creating glyphs from a small set of unit symbols. Much easier than recognizing strokes in moonrunes, and unlike moonrunes you always know how a word is pronounced so you can verbally ask someone.

Maybe if you massively restricted the domain you could come up with a glyph system that's not utter shit. Emojis are like that. Granted emojis are still shit to an intelligent person, but for retards who use IM they are great because 90% of their "communication" is repeatedly stating the same handful of simplistic emotions anyway. So it makes sense to replace these common expressions with single symbols. But guess what, emojis are impractical as fuck without a dedicated input method, and even phone emoji keyboards are a mess to use when there's more than a few dozen emoji to choose from. Anyway, I bring up emojis because they're a great example of practical issues you'll have to solve.

I think moon pcs have an input method where you type the pronunciation and pick from autocompleted runes. If you pick standardized names for each emoji like :sad: (welcome to forums in 1999 lol) you could have the same approach. Or you could have a hierarchical menu system where you go statement->abstract->emotion->negative->sad->basic. But that's already as many keystrokes as just typing "sad". At least seeing 😞 is a split second faster than reading. As for designing the actual symbols the best rule of thumb is probably the more abstract they are the harder they are to guess, so rare glyphs should look like the thing they describe while common ones can get away with being squiggles. See: Everybody knows what :) is but wtf is
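The :name: approach is trivial to wire up, for what it's worth. A toy Python sketch with a hand-picked table (real clients ship thousands of names):

import re

EMOJI = {"sad": "😞", "smile": "🙂", "fire": "🔥"}  # tiny hand-picked table, purely for illustration

def expand(text: str) -> str:
    """Replace :name: tokens with their emoji, leaving unknown names untouched."""
    return re.sub(r":(\w+):", lambda m: EMOJI.get(m.group(1), m.group(0)), text)

print(expand("that thread was :fire: but now I'm :sad:"))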

Also with schematics, you see the same principle of frequency vs. abstractness. Consider electronics: It's not quite obvious what ---/\/\/\/--- is, especially now that we live in an age of semiconductor resistors rather than literally long zigzag wires, but it doesn't matter because who doesn't remember it from seeing it all the time? Meanwhile something like transistor type is a lot less obvious if you haven't seen it before, but transistors are pretty common so people end up memorizing it easily. Now look at inductor with magnetic core: en.wikipedia.org/wiki/File:IEC_Inductor_with_magnetic_core.svg

With basic inductors, at least it looks like a spiral so you can tell it's something to do with a coil. Maybe you're smart enough to realize coils add a time delay so that's the purpose of the component. But with the mag.core inductor, good luck figuring out what it is from just the symbol. And since it's a relatively rare component, many people will need to look it up so this is a failure at effective symbology in my opinion. Admittedly real electronics engineers would have learned all of this in training, but since you can never learn all of a language, much less a secondary conlang, your case is similar to the clueless kid who only took circuits 101 and that's about it. That guy is better off just making a box on the circuit diagram and writing "inductor w/ mag core" than using this symbol.


Why would you do it for numbers? There's already a great system for representing numbers: the Hindu-Arabic numerals, centuries old and far superior to your circles. Numbers in general have lots of highly optimized representations from science and engineering, so you don't have much value to squeeze out with glyphs. You should instead start designing glyphs for abstract concepts, such as emotional judgements, political/ideological concepts, software stacks and architecture. These are complex topics that are hard to get across in normal language but lack a good pictorial/symbolic system. Stay away from anything math, philosophy, music, electronics or chemistry; those already have very specialized systems that work much better than what you're likely to come up with.

Maybe phonetics could also be a decent domain. In IPA and systems like Pinyin, many letters are assigned arbitrarily, so you have to memorize the alphabet first. You could come up with an alternative system of representing phonology where you can guess from each sound's symbol roughly how it's supposed to be produced.

There are a lot more distinct patterns in a room than there can be in a black-and-white glyph on paper.
In what way? How has the IT boom increased the amount of information that can be packed into a symbol on paper for practical human reading?
No, but if you let an optimization algorithm run for millennia in many parallel instances and find that they all end up with fairly similar results, there is a decent chance you've hit an equilibrium of some sort. Chinese characters are conceptually completely different from the Latin alphabet, yet books written in either system tend to take up a similar amount of space. In fact, the mere fact that nobody is trying to squeeze more information in by shrinking font sizes to borderline unreadability should tell you where priorities lie in the practical business of getting data from a page into your brain.

By increasing density you increase error probability and decrease certainty. You need to focus more closely and spend more time re-examining potential errors. I think OP raises an interesting question, no doubt, and perhaps there is some room for improvement, since historically part of the limitation was mechanical reproducibility, but I wouldn't expect anything too radical. The key point of interest is finding some rigorous approach to determining the limiting factors of information density in visual representations like these.

🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳

🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳🍒🌳

👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀
👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀👀


🤷‍♂️

I don't think that's the case anymore tbh 😜😜😜😍

✨✨✨😆😆😆🐱‍🚀🐱‍👓🐱‍🐉🐱‍💻🐱‍🏍✨✨✨

You're not going to gain much by using phonetics. Phonetic writing systems exist (my own native language uses one, and the Latin alphabet was one originally). You will end up with much the same density unless you introduce way more sounds than people can remember or distinguish (another factor that has already been optimized for by evolution). At that point it just becomes an exercise in rote memorization of symbols or parts of symbols, like the electrical schematic shit.

I don't think it really matters what kind of information encoding you use, the limit is more about the practical limitations of human vision and visual input processing - make it smaller and reading takes longer/is harder. The more distinct the patterns are, the smaller they can be before they fade into uncertainty - but the fewer the patterns there are in total, the more distinct they can be.

Emojis are just pictograms. They make slightly different tradeoffs than phonetic systems.

But I'm aphantasic. Feels really bad.

I mean, it's not THAT hard. But you still have about as many different words in that post as unique emojis. Now try making a coherent post with 100 unique emojis. Not so hard, but you do have to install an emoji keyboard.


But that's literally it, user. Almost any child can be taught to read and write. But those agents are preselected through many aptitude tests and requirements like college degree/selective application process. They're an elite population and even so many still wash out or get assigned to other tasks because not everyone can do it.

If your goal is to make some sperg-tier language that brainlets can't into, great, but good luck with adoption.


I'm not talking about phonetic writing, but specifically IPA, which is used to precisely show pronunciation of foreign languages. Now the thing about IPA is it started out with just Euro languages, which are all pretty similar and already have an almost phonetic alphabet, so a lot of initial IPA symbols are just the same Latin letter, for example a makes an a sound. To someone familiar with any Latin alphabet they are very intuitive. So to use IPA for a language like English, Spanish, French, German, you hardly need any training at all.

But over time people started applying it to more distant languages and encountered sounds that were not present in any Euro language and have no Latin representation that's even close. So they had to invent new symbols, and they didn't do a great job of these. For example, good luck telling what sound ʡ is. At least with ŋ you may not know exactly what it is, but you can tell it's similar to n somehow. So if you want IPA for non-Euro languages, you have to memorize a lot of symbols. But they're not the easiest to memorize because the shape of the symbol has little correlation with the sound. Contrast with electronics symbols, which are much easier to learn (barring exceptions like my inductor example) because each symbol looks like an idealized version of the component. When you sort of remember what a symbol is, it's not hard to look at it and remember.

There are only so many sounds the human mouth can make, so you only need a handful of glyphs for an IPA replacement, from a few dozen to a few hundred depending on how detailed and extensive you have to be. Now I realize that Tolkien's elvish is a great example of an intuitive spelling system: Most consonants have 1-2 loops depending on whether they're voiced or not, and horizontal line indicates nasalization. So if you've seen m before, you can guess what the symbol for n is meant to be. OP could start by making a more universal system like this. But that's just one idea, not necessarily the best one.

It's built into chrome 😂😂😂😂🤣

This doesn't really have anything to do with your glyphs, but I'm sure you have seen one of those gifs that flash a text word by word on the screen to demonstrate how our reading speed (wpm) changes when we don't have to move our eyes. A system that made stuff like that practical for everyday use could dramatically increase your information digestion.
I imagine a program that rapidly shows you the words of a passage at a quick wpm and then gives you the option to review it or jump to the next one. It wouldn't be hard to automatically generate these gifs either. Comprehension would probably be pretty bad in the beginning, but I'm sure you would get better with some training.
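Something like that takes only a few lines in a terminal. A Python sketch; the wpm default and the fixed display width are arbitrary choices:

import sys
import time

def rsvp(text: str, wpm: int = 400) -> None:
    """Flash one word at a time at a fixed spot so the eyes never have to move."""
    delay = 60.0 / wpm
    for word in text.split():
        sys.stdout.write("\r" + word.center(24))
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

rsvp("Imagine the entirety of a book's knowledge flashed at you one word at a time.")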

Chinese (and Japanese) books are shorter than their English equivalents, a difference big enough to notice at a glance between an English and a Chinese version of the same text, but it isn't a factor of ten or more like what OP imagines for his language.
Color.

722248

.i mi na jinvi lo du'u do djuno lo du'u lo lojbo cu banli (Lojban: "I don't think you know that Lojban is great.")

.i e'o ko cilre fi lo lojbau (Lojban: "Please learn Lojban.")

install gentoo

.i ko kibycpa zoi gy. Gentoo gy. (Lojban: "Download Gentoo.")

bump

glyphs were a mistake

...

n-no u

oldbug it's you, ilu but you gotta fix your urbit shit, tired of these discontinuities

I remember some kind of website or browser addon that does this. You put in the link of a site you want to read and it rapidly goes through all the text on the page. I found it pretty mentally exhausting and only used it a few times.

I'll take QR Codes for $500, Alex.

Attached: Microsoft_QR_Codes.PNG (381x301, 147.67K)

The Arrival glyphs represent a sentence, or a paragraph at most, not an entire book's worth of knowledge. The human brain can't process a book's worth of knowledge in a few minutes unless it's extremely simplified.

Aim for being able to represent a sentence with a glyph. But then you also have to make the glyph easier and faster to read and write than an alphabetical sentence; otherwise there is no advantage.

Honestly I doubt you could make a system better than modern written English. Maybe there could be a better language and alphabet, but that would be a difference of degree, not category. The further away you get from representing spoken language, the more unintuitive it will be, unless you use pictograms representing concepts. But then, they would take up more space and take more time to write.

I think we might actually already have what you're describing. They're called memes. If a picture speaks a thousand words, a good meme with a picture and some select words can speak ten thousand truths directly to the soul. But you can't hand-draw memes without taking ages. A book with just memes, all building on each other to convey the message and an understanding of the book, would be a very interesting concept to try. It could say as much as a book but in 1/10th of the pages.

A glyph represented by an 8x16 array of monochrome pixels needs 128 bits of memory, and if the range of glyphs is limited to the ASCII graphic characters (including character 0x20, i.e. space), then each glyph conveys only about 6.6 bits of information. How is that efficient, and where's the "redundancy" you are speaking of?
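Spelling out the arithmetic in that comparison (95 printable ASCII characters, space included):

import math

bits_per_bitmap = 8 * 16                  # 128 bits to store the monochrome pixel grid
bits_per_glyph = math.log2(95)            # ~6.57 bits of information per ASCII graphic character
print(bits_per_bitmap / bits_per_glyph)   # ~19.5: the pixel form is ~19x larger than the information it carries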

It's called spritz. I've found it pretty useful for reading reports at really high WPM, but problems arise when you need to blink. You can miss entire sentences in the blink of an eye.

We would need to go from alphabet glyphs to dotcode (something like QR codes) to make "glyphs" convey an amount of information that is closer to the amount of information necessary to represent them. However, would it be possible to teach humans reading dotcode efficiently and reliably? Using most of possible bitmaps representable in a given type of array as "glyphs" makes differentiating them from one another much more difficult - even hardware devices have trouble reading dense dotcode if the circumstances (lighting conditions etc.) are not optimal, even though they have no issue with processing them quickly.

there's your redundancy

stopped reading there

data loss is practically inevitable if reduction is your goal.
I think english is excellent tbh. there is alot we don't even know about english on average. can you even handwrite, bro? like cursive, flowing, connected, handwriting, with a pen? Dan Winter showed how all english handwritten characters can be created by viewing a 3d spiral (expanding outward and upward by the golden ratio) from different angles. that was lost in translation once we started using block print. Some jew wrote a book about it (hebrew and english both can be made from the spiral) and Dan shared the info for free because it's fundamental in our language and that knowledge shouldnt be locked behind a paywall. Dan even called the guy a jew. that jew sued Dan and stole his website. Dan Winter is a hero, that jew is a sheckle-grubber.
How many other mysteries of language exist unknown to most people, that would be totally lost if we move on to a new system before even truly understanding this one?


I'm glad somebody else showed a masonic trestle board, I can't upload graphics from TOR
masonicdictionary.com/trestleboard.html


No offense bro, you're trying something new and that's cool, but Binquadratetric is much more graceful than this circle jerk IMHO
gregapodaca.com/numerography/files/019.html
I think we should use Base65536 in modern computing & cryptography, and anybody who is sick of Arabic numerals could start learning BQT as a new system of notation. I like base 10, and it is not arbitrary (as Vortex Math shows, it is inherent in nature), so we should not give up on base 10... but base 65536 can be much more informationally dense, is beautiful, and is not Arabic in origin (neither is base ten; only our most common way of writing base 10 is).

btw that could also be used as a base-16 writing system using only 1 of those glyphs instead of 4; then it's BQ, not BQT (BinQuadratic rather than BinQuadraTetric)
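For what it's worth, base-65536 digits are just a number's 16-bit chunks, so the conversion itself is trivial; how each digit gets drawn is the whole open question. A sketch in Python:

def to_base_65536(n: int) -> list:
    """Split a non-negative integer into base-65536 digits, most significant first."""
    digits = []
    while True:
        n, d = divmod(n, 65536)
        digits.append(d)
        if n == 0:
            return digits[::-1]

print(to_base_65536(722248))  # [11, 1352], since 11 * 65536 + 1352 = 722248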


"Darmak and Jalad at Tanagra" :-)

op, i think you are describing a painting. it's a massively parallel "glyph", not serial like traditional written language. it can tell an entire story, contained in a rectangular frame.

Attached: ilya-repin2.jpg (1797x1300, 681.48K)