Do we create systems of neurons that, when fired, give us both the image and the information? If anyone has a resource I can look at to gain a deeper understanding of the subject, I would love to see it. Or better yet, teach me how it works, with a reliable source linked.
I’m not sure, but you might find some results if you search Google for phrases like “memory and spatial navigation”.
This is related and blow-your-mind interesting. Maybe thoughts—even abstract ones—have a spatial component. This is fun to think about along with method of loci…
There are no direct sources, because even today the brain is not well enough understood.
There are, however, some fundamental findings you can put together to reach the kind of conclusion you may be after, as well as models that attempt to extrapolate a system beyond those findings while remaining accurate (often a model breaks down at some point, but up until that point it is accurate).
Research papers and graduate-level textbooks are generally the places to look for such things.
Even your question only has a vague answer: it can be both yes and no.
In Principles of Neural Science (often called the 'bible of neuroscience'), there are pages explaining things relevant to this, based on models and data, e.g.:

[figure from the book: properties of LTP, including cooperativity and associativity]
From this figure it is reasonable to say that using a memory palace, or even image association, invokes associativity, if and only if the input (or one of the inputs) is strong. At the very least it will invoke cooperativity.
Things are never that simple, though: the brain is highly dynamic, and there will be interference, among other things, from the timing between inputs, changes in amplitude, and even the extent and quality of the LTP itself. You can view the above as holding only under perfect conditions, or as a boundary requirement.
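To make the distinction concrete, here is a toy sketch of the two LTP induction rules mentioned above. This is purely illustrative, not a biophysical model: the threshold value and input strengths are made-up numbers, and the only point is the logic that weak inputs alone fail, a weak input paired with a strong one succeeds (associativity), and enough weak inputs together also succeed (cooperativity).

```python
# Toy illustration of LTP induction rules (hypothetical numbers, not real biophysics).
THETA = 1.0  # assumed depolarization threshold for inducing LTP


def potentiated(inputs):
    """Return the set of active inputs that get potentiated.

    Rule sketched here: if the summed drive of all concurrently active
    inputs crosses the threshold, every active input is strengthened.
    """
    total = sum(inputs.values())
    if total >= THETA:
        return {name for name, strength in inputs.items() if strength > 0}
    return set()


# A weak input alone stays below threshold: no LTP.
weak_alone = potentiated({"weak": 0.3})

# Associativity: the same weak input, paired with a strong one,
# is potentiated along with it.
paired = potentiated({"weak": 0.3, "strong": 1.0})

# Cooperativity: several weak inputs firing together cross the threshold.
cooperative = potentiated({"w1": 0.4, "w2": 0.4, "w3": 0.4})

print(weak_alone, paired, cooperative)
```

In this sketch a memory-palace cue would play the role of the strong input that drags the weak (new) association over threshold, which is the "if and only if one input is strong" condition above.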
If you are not a pirate, Amazon sells the newest edition of Principles of Neural Science (1,761 pages) for about $80.