[x3d-public] Accessibility and semantics
yottzumm at gmail.com
Fri Jun 17 17:27:20 PDT 2022
I have been following Donald Hoffman, and recently saw a video of him with
Lex Fridman, where Donald suggested that space-time is the older
generation's interpretation of what they were “seeing,” and that
something more basic creates space-time, perhaps the sensory organs.
Today, I saw an interview with Jordan Peterson in Montreal where he said
that people see meaning. I didn’t understand that, but my wife also said
she sees meaning.
I would state that I see color and perhaps depth, if not overlays of color.
This goes along with “fragment shaders.”
So we’re doing good work tackling the semantics of X3D. How does one
render meaning, though, possibly without relying on words, geometry, or
texture? Something that a deafblind person might have a clue about? I am
pretty clueless about that, probably because I have hypophantasia. Yes,
words can elicit multiple meanings, but what about shapes? How does one
convert a mesh to meaning without some form of intelligence? Is meaning
equivalent to function? Is meaning equivalent to tables?
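Since we are talking X3D semantics, one partial hook already in the standard is the Metadata* node family: a mesh can carry machine-readable “meaning” that an assistive device could surface without depending on the rendered pixels. A minimal sketch (the node names and containerField come from the X3D spec; the description string and the schema.org reference URL are my own illustrative choices, not anything standardized):

```xml
<!-- A red sphere whose "meaning" travels with the geometry,
     not with the pixels it produces on screen. -->
<Shape>
  <!-- containerField='metadata' attaches this node as the Shape's metadata -->
  <MetadataString containerField='metadata'
                  name='description'
                  reference='https://schema.org/description'
                  value='"a small red ball"'/>
  <Appearance>
    <Material diffuseColor='1 0 0'/>
  </Appearance>
  <Sphere radius='0.05'/>
</Shape>
```

A screen reader or tactile display could read the MetadataString instead of (or alongside) the rendered image, which is one answer to “converting a mesh to meaning” that does not require intelligence at render time, only an author who supplies the annotation.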
My wife is a photographer, so obviously she wants to capture meaning in her
pictures. I believe that animators want to capture meaning, or something
that’s going on inside our heads, not necessarily something in what used to
be called space-time.
As I step into sign language and tactile sign language, I see even more
clearly that, for some people, seeing is meaning.
I understand that different views can create different meanings, and the
word “view” may have multiple meanings.
So how about you? When you see, is your frame of reference inside
space-time, meaning, or color, or all three? More? Is a picture worth a
thousand words? Are we getting more complex than Einstein?
Your words are welcome!