Imbecile, look at the image.
A.I. still can't do basic bitch things
Actually it's about restricting the network to specific input nodes (the rest are inhibited at first) so it learns partial states of the input. After it has processed that view, it randomly inhibits a different set of input nodes and repeats the process (it doesn't need to be only input nodes; any node on any layer could be targeted). The idea is that the inhibitor-feedback part is trained to find correlations between sets of partial data. Eventually it learns to purposely select which input nodes to inhibit next, based on the data it was exposed to just before. Much like how a person's eyes rapidly shift focus towards specific points of an image, depending on what they recognize first.
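Not claiming this is the actual design, but here's a rough PyTorch sketch of what the random-inhibition phase could look like, assuming a plain MLP and binary masks over the input nodes. The names (`PartialViewNet`, `random_mask`), the layer sizes, and the reconstruct-the-inhibited-nodes objective are all just my own stand-ins for illustration.

```python
import torch
import torch.nn as nn

class PartialViewNet(nn.Module):
    def __init__(self, n_inputs=784, n_hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
        # Stand-in for the "inhibitor feedback" idea: try to predict the
        # inhibited (masked-out) inputs from the partial view it did see,
        # i.e. learn correlations between sets of partial data.
        self.decoder = nn.Linear(n_hidden, n_inputs)

    def forward(self, x, mask):
        partial = x * mask              # inhibited nodes contribute nothing
        return self.decoder(self.encoder(partial))

def random_mask(batch, n_inputs, keep_frac=0.25):
    # Inhibit a random subset of input nodes; only keep_frac stay active.
    return (torch.rand(batch, n_inputs) < keep_frac).float()

net = PartialViewNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(32, 784)                 # stand-in batch of flattened images

for step in range(3):                   # repeat with different inhibited sets
    mask = random_mask(x.size(0), x.size(1))
    recon = net(x, mask)
    # Loss only on the nodes that were inhibited: can the partial view
    # account for the part the network never saw?
    loss = ((recon - x) ** 2 * (1 - mask)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```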
You can immediately see the advantage here if it's practical to implement in deep learning: instead of needing to focus on literally everything in an image all the time, it'll eventually learn where it should look, and by how much, based on the initial patterns/states it notices first. More importantly, it'll learn what to ignore as well.
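Continuing the sketch: once the random-inhibition phase has run for a while, a small "glance controller" could pick which nodes to un-inhibit next, conditioned on what the first partial view produced, which is the saccade analogy above. Again, this is a guess at one way to wire it, not the poster's design, and `GlanceController` is a made-up name.

```python
import torch
import torch.nn as nn

class GlanceController(nn.Module):
    def __init__(self, n_inputs=784, n_hidden=256, keep=196):
        super().__init__()
        self.keep = keep
        self.scorer = nn.Linear(n_hidden, n_inputs)  # score every input node

    def forward(self, hidden_state):
        # Higher score = "look here next"; everything else stays inhibited.
        # Note: hard top-k has no gradient, so a real setup would need a
        # soft relaxation or RL-style training to learn these choices.
        scores = self.scorer(hidden_state)
        idx = scores.topk(self.keep, dim=-1).indices
        mask = torch.zeros_like(scores)
        mask.scatter_(-1, idx, 1.0)
        return mask

# Usage: first glance uses a random mask, second glance is chosen by the
# controller. The encoder here is a stand-in for the trained one above.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
controller = GlanceController()
x = torch.rand(32, 784)
first_mask = (torch.rand_like(x) < 0.25).float()
h = encoder(x * first_mask)             # what it recognized first
second_mask = controller(h)             # where it decides to look next
h2 = encoder(x * second_mask)
```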
I'm looking. It has photographs selected as the image search result type, yet dumb-ass Bing is displaying paintings.