Hugwis mental models
- This article won't be updated very frequently, but I'm working on it and intend to come back to it later. Hence, it is my wish that this article be kept. Thank you. (2025)
The Hugwis mental models (pronounced HOOG-wiss) form the underlying conceptual structure uniting all conlangs made by SN2. Hugwis is an acronym of all the conlangs I have planned.
Background
The underlying assumptions behind Hugwis are (a) an a priori conlang made from scratch reflects how its creator organizes and categorizes concepts mentally, (b) I can gain understanding of my own mental model by using it to study itself, and (c) this mental model is relatively stable over time.
Since I was a kid, the way I speak has often confused people, even when I was trying to be normal. I have often felt a need to make up new words, to describe things better or just for fun. When I was a teenager I started to make up my own syntax too, like *"the flickering property of the lights" for "the lights keep flickering", or "than better one" for "better than the other". In time, I started to realize I organize concepts in a peculiar fashion, and I spent some time seeking the reason why.
One's way of modeling the world is mostly innate, with in-born constructs like objects, concepts, and actions. This intuitive process is then modified by what one learns later in life, including languages and scientific theories. In my case, the (very introductory) materials on logical fallacies, graph theory, and number theory I read on the Internet have likely influenced Hugwis.
The principles of Hugwis are redundancy, abstraction, systemization, chaos/ambiguity, preservation of detail, and flexibility/extensibility.
"Structs"
Basic structure
The basic unit of Hugwis is the concept. Information stored in the brain is thought to be encoded in a highly interconnected, mostly directed graph of concepts, and each concept is a node that links to other nodes. A concept may be simple, like built-in concrete nouns or counting numbers, or complex, with its own tree structure. There are several types of attributes a complex concept holds:
- DEFAULT (usually hidden), list of concepts that link to and link from the current one;
- CORE attributes define the concept;
- EVAL attributes are evaluated from other attributes of the same concept;
- An attribute may have the value (null), or nothing. The (null) indicates information that has been forgotten. The nothing appears in predicates like "nothing can go faster than light", and is just an ordinary abstract concept that I understand intuitively and need not describe with words. Here, the nothing does noth.
In the earliest iteration, a complex concept was described as a C-style struct. (I once fantasized about simulating my mind on my laptop.) Now, it is described with a formal grammar, and I may make a template to present them here with more ease.
Numbers
concept int, OR n_components (int), components[ digits[] ], (outgoing links: cultural meanings, error/status codes, and the like)
Consistent with existing research, Hugwis distinguishes non-symbolic numerals, like "two" in "two apples", from symbolic numerals, like π or -3250. In numerical operations, numbers are seen as purely conceptual tokens that can be manipulated according to conventions/rules; these rules are designed to describe features of the Universe, and are not arbitrary.
The two forms of certain numerals are linked to the non-symbolic/symbolic distinction. The number 20 is usually considered symbolic, since I can't glance at an unordered collection of 20 objects and derive the numeral immediately; an ordered 4-by-5 array of objects, however, is non-symbolic. The first form is xoqukul "two_digits-two-zero", and the second form is cestogis "four-times-five".
WordAlphabet
length_estimate, morpheme_first, morpheme_last, components[], language_origin
In Hugwis, the most prominent feature of a word is its length: 1-2 very short; 3-4 short; 5-8 medium; 9-16 long; 17+ very long (all ranges inclusive). This is because when I try to recall an unfamiliar word, the first information I retrieve is often this estimate of its length.
Consistent with existing research, my brain may assign extra importance to the first and last elements in a list. I sometimes misspell words by switching two letters in the middle (metathesis), like *"hazadrous" and *"revelant", and at one time I often forgot trisyllabic laxing, writing *"maintainance" and *"exclaimation", although I seldom misremember the start and end, even with silent letters as in "ptosis" and "choux". In Hwnic, the existence of circumfixes is based on this focus on start and end.
As a rule, when thinking about a word, I typically do not split the entire word into individual morphemes: "uninformed" is just "un-" + "informed", not decomposed further, and "interest" is even encoded as a single component.
Action
agent, action, patient, properties[], ..., is_reflexive, is_agent_unknown, is_patient_unknown
Isn't "is ... unknown" an anti-pattern? In programming this may be true, but as a rule, boolean values in Hugwis default to false, so the attributes get a negation in their names.
The passive voice in Hugwis is simply an action with is_agent_unknown = true, so my conlangs often lack a separate passive voice. It's true that is_reflexive seems redundant here, because a reflexive action can be represented by setting the agent and patient to the same concept, but this additional property may be an artifact introduced by language, namely reflexive prefixes like "self-".
"Processing"
Visual processing (not mapped)
Edge-detect and intersection-detect.
When viewing a flow chart, the nodes (intersections) are most important. With a map, I see intersections rather than roads and alleys. This is useful in many ways, because interesting places are often found at intersections, and I could better memorize roads which do not allow pedestrians to cross. Sometimes this impedes learning, as focusing on nodes can make me lose sight of shapes.
After using Bézier curves for animation and font-making, I even started to see some curved shapes as quadratic Bézier splines, though I can't yet work out the waypoints for a random squiggle at a glance.
Concept-emotion distance
Words that are closely linked to certain emotions, like "death", "fight" and "localization", are processed differently. It's hard to determine exactly how, though these words have a different "feel" from neutral nouns. I believe they are encoded to be both more intuitive and more precise, as opposed to intuitive but fuzzy words like "happy" and "sad", or precise but abstract words like "ontology" or "cyclohexane".
Dichotomies
All of my conlangs can be said to have two word classes. Even if I know many attributes of things exist on a sliding scale, I still see black and white, artificial and natural, dense and light, hard and soft, and other opposition pairs.
In Hwnic, this system is modernized to contain a "somewhere in between" and a "does not apply".
Design choices of Hwnic
Hwnic is named after the "window handle" type HWND in WinAPI. (It is not related to HWN Energy acquisitions, printer models, or the surname Hwang.) It is designed to have a precise structure like a programming language, as it arose out of a desire to limit the chaos/ambiguity aspect of my thinking. However, this is no longer upheld at all times.