Re: discussion on the various possible theories that may be applicable to LRLs
Real de Tayopa Tropical Tramp said:
sigh, such a fixation on lrl's.
We commonly see with our eyes, but how do we see and just what is it that we are seeing? Hint photons.
Don Jose de La mancha
Hi Mr. Don,
I don't normally post here, but you opened a discussion that could be interesting. I have read Marc's notice:
MOST IMPORTANTLY
TreasureNet is about treasure hunting, metal detecting and prospecting. We expect our members to post messages generally related to these main topics. While we do provide some more generic forums, like Favorite Music, the general subject of TreasureNet is treasure, and treasure hunting. Thank you for helping us keep our content on topic.
Since you are discouraging a fixation on LRLs, I must assume you really want to hear an answer about how we see things. In other words, this is not really a discussion about LRLs, but a discussion about the mechanism of seeing, which you will later spin into some far-stretched basis to prove that LRLs really work. I presume this would make it ok to talk about eyeballs instead of LRLs. If this is the case, prepare for a long answer that you are free to skip and ignore if it gets boring.
(skip to bottom for the short answer --- where it says "****** SKIP HERE FOR THE ANSWER *******")
How we see:
There are two parts to how we see: a physical part and a mental part. Physically there is not so much of interest. It is all fairly well-known mechanics from the time photons (or waves, for those who insist on wave theory) deliver energy that passes through a lens and projects an image onto a curved focal surface in your eye -- the retina. The light sensors on this focal surface are a little tricky: the central part is packed with more sensors, which give better resolution than the peripheral ones. And there are some further physical details. At the optic chiasm, the fibers from the inner (nasal) half of each retina cross over to the opposite side of the brain, so each brain hemisphere ends up receiving the opposite half of the visual field from both eyes -- a strange arrangement. For what reason?
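To put one number on that projection step: if you treat the eye as a simple thin lens in air (a big simplification of the real optics -- the object distance and the diopter figure below are just textbook ballpark values I picked for illustration), the standard thin-lens relation tells you where the image lands. A minimal sketch in Python:

[code]
# Toy thin-lens calculation: 1/f = 1/d_object + 1/d_image
# All numbers are illustrative, not measurements.
f = 1.0 / 60.0     # ~60 diopters, a textbook figure for the eye's optical power
d_object = 2.0     # object 2 meters away

d_image = 1.0 / (1.0 / f - 1.0 / d_object)
print(f"image lands {d_image * 1000:.1f} mm behind the lens")  # ~16.8 mm
[/code]

Nothing deep there -- the point is only that the projection onto the retina is ordinary, well-understood optics.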
But it gets even more tricky... The light sensors and the optic nerve are not really a nerve at all. They are brain cells that happen to be sensitive to light -- brain cells that pushed forward to become a retina during early pre-birth development. So there is no real nerve interface; it is a direct connection into the brain. And to further complicate things, there are two basic kinds of light sensors. One group (the rods) is very sensitive to even small amounts of light, but cannot distinguish what light frequency is being detected. These are the sensors on the retina that see in dim and dark scenes. The other light sensors (the cones) can see various colors, but the light falling on them must be fairly strong in order to be detected.

Moving on from the lens and light sensors, there is a huge amount of decoding (mental operations) that happens after light images project onto a retina, and even more decoding when the images come from a pair of retinas. Researchers found that there are brain cells that look for lines within the field of vision, working from the densely packed array of light sensors in your retina. But this is done at the rear of your brain, where the so-called optic nerve ends. In fact this area of the brain is one of the largest sensory centers. It seems vision is one of the most important senses we use for survival.
Without detailing all the mechanics, these are some of the vision functions that are handled by brain cells:
A. Stereo vision and depth perception to estimate distances.
B. Identifying scenes seen with the eyes that represent an immediate danger.
C. Color vision interpretations to extract more information than could be seen in monochrome.
D. Special decoding to identify lines within the array of retinal pixels.
E. Edge detection functions (a toy machine version of this appears in the sketch after this list).
F. Determining that it must be dark because all we see is monochrome.
G. Filling in details that are missing, such as the image at the retina's "blind spot", or other more daring attempts to construct an image you really didn't see.
The list goes on, but these are all physical/mechanical functions performed by brain cells, which sometimes involve very sophisticated processing. We can note that these functions are automatic, and do not require any decision-making or exerting any effort in order for them to happen. Of course none of the brain circuitry which performs these decoding functions is very similar to man-made electronics. There are no raster graphic schemes in these biological chemical-electrical mechanisms. But they seem to do an efficient job which has yet to be matched entirely by a man-made machine.
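Since items D and E may sound mysterious, here is what they amount to in machine terms. This is only a toy sketch in Python -- the image, the kernels, and the threshold are all invented for illustration, and nobody claims the brain literally runs this code -- but orientation-selective brain cells are often compared to little difference filters slid across the pixel array:

[code]
import numpy as np

# A tiny 8x8 grayscale "retina" with a bright vertical bar in the middle.
image = np.zeros((8, 8))
image[:, 3:5] = 1.0

# Sobel-style oriented difference kernels, a common machine stand-in
# for the orientation-selective "line detector" cells mentioned above.
kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)  # responds to vertical edges
ky = kx.T                                 # responds to horizontal edges

def convolve2d(img, kernel):
    """Plain 'valid' sliding-window filter so the sketch needs only numpy."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

gx = convolve2d(image, kx)
gy = convolve2d(image, ky)
edges = np.hypot(gx, gy) > 2.0  # arbitrary threshold: "a line is here"

print(edges.astype(int))  # 1s mark the two sides of the bright bar
[/code]

The only point of the toy is that "seeing a line" is a computation performed on raw pixel signals -- exactly the kind of work the visual cortex does for us automatically.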
Then we come to the second part of how we see.
This is where things get interesting. We leave the world of pure physical mechanics and enter the world of mind.
To start, the images that project on our retinas are dumb images. The raw data is only a very large collection of signals from brain cells which have been stimulated to different degrees by different light intensities -- a huge collection of impulses carried by a bundle of long, skinny brain cells. By the time they are processed by brain cells at the back of the brain, we arrive at an image (a 2-dimensional light pattern) which can be compared to the opposing eye's light pattern, allowing more mental processing to arrive at a 3-dimensional image. So we start with a collection of light impulses from our eye sensors, and end up with a mental idea of a 3-dimensional object, which is different from the light patterns we actually saw. We are not actually seeing an object; we are interpreting images of light that traveled from the direction of the object to our eyes, to predict what the object is. We are usually good at this prediction, which can be verified by walking to the object and testing whether it is indeed what we interpreted it to be from a distance. But there are occasions when we predict badly. For example, we could mistake a paper bag blowing across the road for a small dog, then step on the brakes only to find there was no need.
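To put a number on that two-eyes-to-three-dimensions trick: under a simple pinhole-camera model (my simplification for illustration -- no claim that the brain computes it this way, and every number below is invented), the distance to an object falls out of how far its image shifts between the two eyes:

[code]
# Toy depth-from-disparity estimate under a pinhole-camera model.
# All numbers are invented for illustration.
eye_spacing_m = 0.065      # baseline: a typical distance between human pupils
focal_length_px = 800.0    # "focal length" of each eye, in pixel units
disparity_px = 13.0        # how far the object's image shifts between the eyes

# Classic stereo relation: depth = focal_length * baseline / disparity
depth_m = focal_length_px * eye_spacing_m / disparity_px
print(f"estimated distance: {depth_m:.2f} m")  # -> 4.00 m
[/code]

Notice the division: as the disparity shrinks toward zero for distant objects, the estimate blows up, which is one reason stereo depth perception gets unreliable at long range -- for brains and machines alike.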
Of course not all people see the same. Some people have physical variations in their eyes which limit or enhance the information they can take in. And there are also qualitative variations: some people may see colors tinted differently, or may have a better ability than others to recognize objects in the peripheral areas. In some cases, a deficiency in vision can trigger corrective adjustments in the processing part of the brain, which allow more precise interpretations to compensate for the deficiency.
But what we see goes even farther than the optics and the decoding of images. After we identify an object in our field of view, we make some further mental assessments to determine what the object means and how to react to it. If what we see poses some kind of threat, we start to initiate protective measures, which are decided by a different part of the brain. If it is not a threat, we may try to categorize it and decide how to deal with it. This all happens in a split second. After that moment passes, we will have decided one of the following things:
A. Threat: take protective action
B. Important situation for right now: take action to deal with important situation - This could be anything from an important business contact, to reading an interesting book.
C. Interesting thing for later: remember for future reference because priorities don't allow time now
D. Unimportant: ignore as part of the scenery
The list could be expanded, but the point is that the mental operations after the image is decoded include evaluating the object we interpreted and making a value judgment about it.
This is a lot of very sophisticated processing. How much money would a geek have to spend to build an equivalent robotic apparatus that does all of these things, including artificial intelligence on par with the level of discernment a human shows? Could a geek build a machine that competes with a human at quickly recognizing visual patterns and dealing with them in real time, at the speeds we do?
Remember that seeing allows us to determine some basic knowledge about an object: shape, color, brightness, distance, etc. But wait... those brain cells didn't spontaneously decide to give us all this information. Those cells were trained to give us this information -- way back when we were less than a year old. We learned to see things by trial and error until we got it right. The same optic brain cells could just as easily have been taught to give us useless information about distance, color, danger, etc. What we see physically is some light patterns that mean nothing until we learn ways to interpret them. After we learn to interpret what different light patterns mean, then we know what we are seeing. And the more experience we get in doing this, the better we can see, with fewer occurrences of figuring something out wrong.

But even with these basic principles working, we should not forget that the majority of what we learn about seeing is learned before we are a year old. The things we learn about how to see are not logical or scientific; we learn things that work and things that don't. For example, a little kid sees a bright orange/yellow flame and feels his hand getting too hot by the fireplace, and he learns that real bright orange/yellow is bad. Maybe he will learn later that it is not always bad, or maybe he won't. Each person is different. But after years of learning, most people arrive at a similar experience and a similar ability to interpret what they see.
****** SKIP HERE FOR THE ANSWER *******
So, to answer your question of how we see:
We see a collection of signals from brain cells which were stimulated by some light coming from our field of view.
If our eyes are in good condition, we interpret these signals into light patterns. But these are useless images until we apply further brain interpretation and experiential knowledge -- mind functions -- to them; only then do we actually see something other than nonsensical images. My feeling is that even with perfectly working eyes, a person does not see anything until his mind is working to determine what these images mean. Only then could he say he saw something. Otherwise you would see an endless series of light patterns that mean nothing and never will -- much like people with tinnitus, who hear an endless hissing in their ears that will never mean anything; they eventually conclude they are not hearing a real sound at all, and ignore it. I could probably show you some pretty dramatic illustrations of how much optical interpretation defines seeing, if you were interested.
So there is one possible answer to your question.
Did I prove LRLs work?
Best wishes, Mr. Don.
J_P