==OH SHIT==

What I'm saying is that their ledger idea would fall short of its actual potential without an AI capable of abstract reasoning. Right now they're limited to suggestions based on statistical analysis of big data and what they know about marketing. Throw a lot of uncertainty into the system and the abstract representations will fail to stay synchronized with the people they supposedly represent. The result would be a skewed perception, making the idea almost worthless for what they aim to do.

Their example of acquiring a user's weight shows the limits of their capability, since their "thought experiment" failed to take the concept much further. It also shows they were thinking too much like marketers. They suggest persuading the user to get a customized scale to acquire such data. But the truth is, there are ways to acquire that data without the user even being aware of it, if you're creative enough.

That's where abstract reasoning comes in: comparing many ledgers to build analogies that suggest how much that person might weigh, and reasoning about why. Is it because this person has certain thoughts? Their environment and lifestyle? Their mental state? Medical conditions? These are the kinds of questions an AI would have to formulate and ask itself. It answers them by comparing known habits, activity, relationships and patterns against similar ledgers where the weights are known.

It then tries to find a way to test its answers through its understanding of gravity (learned prior to this scenario): if a scale measures weight, then so could the pressure exerted on a vehicle's tire, for example. Both concepts are connected at a particular abstract level; both measure the force a mass exerts on a surface under gravity. Then, using security footage, it could attempt to figure out the person's weight from how much the tire appears to compress when they enter their vehicle. Once it has acquired all that data on its own, it can confirm which analogies/answers made sense or worked, and keep them for future reference.

It sounds far-fetched because for now it definitely is, but an AI with abstract reasoning, a mass surveillance network at its disposal, and a lifelong learning ability would be capable of this.
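To make the two steps concrete, here's a minimal Python sketch of what that loop might look like: a nearest-neighbor guess over similar ledgers, cross-checked against the spring relation m = k·x/g that both a bathroom scale and a compressed tire instantiate. Everything in it is invented for illustration: the ledger data, the spring rate, and the displacement number are assumptions, not real measurements or any actual system's API.

```python
# Hypothetical sketch of the two estimation steps described above.
# All data and constants are fabricated for illustration.

GRAVITY = 9.81  # m/s^2

def knn_weight_estimate(target_features, labeled_ledgers, k=5):
    """Analogy step: average the known weights of the k most similar ledgers.

    Feature vectors are toy numbers standing in for habits, activity,
    and lifestyle patterns.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(labeled_ledgers,
                     key=lambda l: distance(target_features, l["features"]))[:k]
    return sum(l["weight_kg"] for l in nearest) / len(nearest)

def weight_from_suspension(compression_m, spring_rate_n_per_m):
    """Physics check: a load m compresses a spring by x = m*g/k,
    so m = k*x/g -- the same abstract relation a scale exploits."""
    return spring_rate_n_per_m * compression_m / GRAVITY

# Toy ledgers with known weights (all fabricated).
ledgers = [
    {"features": [2.1, 0.4, 7.0], "weight_kg": 71.0},
    {"features": [1.8, 0.9, 6.2], "weight_kg": 68.0},
    {"features": [3.5, 0.1, 9.1], "weight_kg": 95.0},
    {"features": [2.0, 0.5, 7.3], "weight_kg": 74.0},
    {"features": [3.2, 0.2, 8.8], "weight_kg": 90.0},
]

guess = knn_weight_estimate([2.2, 0.45, 7.1], ledgers, k=3)

# Suppose footage shows ~2.9 cm of extra compression on a car whose
# effective vertical stiffness is ~25,000 N/m (both numbers invented).
check = weight_from_suspension(0.029, 25_000)

print(f"analogy estimate: {guess:.1f} kg, physics check: {check:.1f} kg")
# If the two agree within tolerance, keep the analogy for future reference.
```

On this toy data the analogy step lands around 71 kg and the physics check around 74 kg; agreement like that is what would let the system keep the analogy, exactly as described above.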

But to even begin on such a thing, you'd need to work out how to encode abstractions as interpretable policies for meta-learning, and vice versa.

TL;DR: the only thing that would fuck up such an AI is the introduction of new information, or of new ways of processing that information, that are completely unlike the waveform of other comparable functions, such as introducing a religion that isn't the one-world waveform religion. I say waveform because it requires people, at a base level, to be thinking exactly the same on a few things. Change that and it's completely foreign to the AI, causing unexpected errors.

very informative. thanks.

bump

More useful than the OP was. Thanks.

Umbrella Corporation - the advert.