Wednesday 4 November 2020

Language 1

Language is always poetic. Words, and the arrangement of words, evoke associations, first through their sound, their meaning, their typical environment, but then also through the personal experience attached to them. Written language is poetic, but its poetic associations are of course different from those of spoken language—e.g., visual impression and location create additional associations—and spoken and heard language, in turn, differ from each other. And, to repeat, all language encountered is poetic because it is experienced by individuals who, themselves, are poetic.

"Set", as used by mathematicians, has own sound, its associations (to drawings, to numbers, to proofs, to problems, to extra-mathematical concepts like a tennis set); "Menge", the corresponding German terms, shares many of those; but also has a direct association to many things. Getting rid of associations that are deemed irrelevant is difficult, and specialists try to do it with definitions; but ultimately only succeed with Wittgenstein II, i.e., by training the "right" language games over and over and over again. One helpful method is to coin new terms: "homomorphic" or "autopoiesis". But extending the usages to other areas cannot be prohibited, and thereby new associations emerge and sink into the minds of writers, readers, speakers, and listeners.

Establishing a regime that evaluates the use of language in a discipline is therefore "necessary," i.e., it happens. "Autopoiesis" has been used in juridical contexts, but never in court language. Homomorphisms can be drawn over into philosophy, but the language of cattle raising does not allow their use, to this day. Of course, delineating "court language", "philosophy" or "talking about cattle" are, in themselves, enterprises that include language, but this is "just so". Language games appear to converge almost always, i.e., people are happy with handed-down traditions, thoughts, and uses of words. Thus, boundary transgressions are typically easily recognized; and the desire for constructive interaction, custom and, maybe, laziness will be enough to prevent them. If not, an element of power will come in.

However, if an enterprise like philosophy draws significant breath of life from such transgressions, sanctioning unwanted boundary transgressions requires even more, and, in places, mainly power. Life is so much more than language—it is earning a living, seeing oneself accepted, being attracted by things, events or people, having possibilities; all this can be shaped or even revoked by power. And so language, ultimately, because it is poetic, is also shaped by power, even in contexts where power itself is not the main game and goal: Which is to say, everywhere.

Still, nothing new here.

Sunday 1 November 2020

Rules 1

Humans can follow rules 100%—and they are the only animals capable of this. I do not (yet?) know why this is so; but it has something to do with what rules are, and what rules are about.

Take chess, as an example. A chess player, after having learned chess's rules—maybe as a child—will always move a rook along rows or columns on the chess board when playing a game of chess. With someone playing 3 to maybe 20 games a day on average, that would make some 10 times 365 times 50, or almost 200,000 games: And most probably, the player will not slip in a single one of them. Even if a rook ends up in a wrong position one day, the opponent, or the player herself, will notice this, and the player will agree that the rule demanded something different from what happened; and will correct the move.
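
The back-of-the-envelope count in the paragraph above, spelled out as a tiny Python check (I read the three factors as roughly ten games a day, 365 days a year, over some 50 years of playing; the reading of the 50 as years is my assumption):

    # roughly 10 games a day, 365 days a year, over some 50 years of playing
    games = 10 * 365 * 50
    print(games)  # 182500, i.e., "almost 200,000 games"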

Or take a writer, writing English texts in current times. He will always write "the" with these three letters. And even if he slips, he will agree if someone points it out; and correct it.

Here is a more complex story of how ingrained rule-following is in humans. I live in Bavaria, near Munich, where we have a mass transit system covering about 5500 square kilometers, with integrated ticketing. On the line from Munich to Grafing, where I live, the ticketing border is just before Ostermünchen, from where the line continues to Rosenheim. This means that for a trip from Munich to Grafing, you have to buy a transit ticket at a ticket vending machine; whereas if you travel to Ostermünchen or beyond, you can buy the ticket on the train. A regular ticket there is about 17 Euros, whereas the fee for travelling without a ticket anywhere in the transit system is 60 Euros.

One day, a lady jumps into the train just before its doors close. The conductor asks her for a ticket: In accented German, she tells her that, being in a hurry, she couldn't get one at the vending machine; could she now buy one to Grafing? The conductor explains that she cannot sell her a ticket to Grafing; this is only possible for destinations Ostermünchen and beyond. The lady insists that she needs to travel to Grafing. Again, the conductor explains the rules: "If you need a ticket to Ostermünchen, I can sell it to you; but not to Grafing". Again, the lady insists that she needs to go to Grafing. The conductor, with all of us listening, with a tiny shrug, tells her that, because she has no ticket, she has to fine her 60 Euros. The lady does not really understand what's going on; but after some discussion, she accepts that these are the rules—when, finally, another passenger interrupts: "Can't you sell her a ticket to Ostermünchen?" The conductor waits a little—then the lady says she does not want to go there. It is getting awkward. Again the passenger addresses the conductor—it is obvious the lady is overwhelmed: "Just sell her a ticket to Ostermünchen, please!" Without a pause, the conductor says: "Ok, I'll do this once for you. That would be 17 Euros." It appears very much that the lady still does not understand what's going on—but she now accepts that she has to go to Grafing with a ticket with destination Ostermünchen, costing her 17 Euros.

Obviously, the conductor did not want to bend the rules in front of all the other passengers: She had to adhere to what's written—travelling inside the transit system area without a ticket costs you 60 Euros; but she could sell tickets to destinations beyond the system's boundary. And, for what it's worth, the lady, too, could not bend the rules she had learned: If you want to go to destination X, you need to purchase a ticket to X. The "non-linearity" of these rules would have led to a crash, had the other passenger not been the catalyst that enabled a different reaction.

My initial thought was that the reason that we follow rules so perfectly is a social one: Playing certain games together requires following the rules perfectly; otherwise games with many players, with high stakes, or running over a long time would quickly deteriorate and end. But much of our civilization seems to consist of such games.

But then I saw another lady sitting opposite me, doing a crossword puzzle. She followed the rules perfectly: Filling it in with well-known German words that had identical letters at the crossing places, and were solutions to the clues given. Nobody forced her to do this. She could have written in any words she wanted, she could have stopped solving it: But no, she racked her brain to find one word after the other. So, it's not a social effect, or not a social effect alone. It seems that we have fun following rules: Solving a crossword, finding a mathematical proof of some unimportant formula, playing a piece of music exactly as it is written in the score.

How can we do that, and how can we agree that we do that (in some specific situation)?

 

Tuesday 27 October 2020

Induction 4

An old thought: The extension of a proposition, and therefore also of an implication, is True or False. But who cares about extensions? It's intensions that count for or against an inductive hypothesis.
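
A minimal sketch of the extensional view, in Python (the predicate names are just placeholders, nothing established): extensionally, an implication is nothing but its truth table, and an implication and its contrapositive have identical extensions, which is exactly what makes the non-black-non-raven paradox of the 23 October entry below possible in the first place.

    from itertools import product

    def implies(a, b):
        # the material implication: false only when a is true and b is false
        return (not a) or b

    for raven, black in product([False, True], repeat=2):
        direct = implies(raven, black)                   # Raven(x) -> Black(x)
        contrapositive = implies(not black, not raven)   # not Black(x) -> not Raven(x)
        assert direct == contrapositive                  # identical extensions
        print(raven, black, direct)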

Abstraction 1

There are no circular objects in the real world, and no rectangular ones (of course, one can give a new definition of "circular" that includes some, probably mathematical, rules. That's not what I mean here; I mean "circular" as used by common people, or by people from, say, 10000 B.C.). But of course, we can and do label some objects as being (or appearing) circular, or rectangular, like the sun as seen in the sky, or the basement of my house looked at from above. Is this labelling a language thing, or something pre-language? As far as I know, some animals can reliably distinguish circles from rectangles, so it's a pre-language thing. Might it, then, not be something in nature itself, i.e., is the sun's image in the sky circular?

Question: Are there sharply distinguishable things in nature? Yes, there are: Liquids and rigid bodies are "just different". Of course, there are things like hot wax that are somehow both or in-between. But that doesn't mean that there aren't many things that are liquid; or that are rigid.

Being circular gets more and more cloudy the more detail one looks at. Being a liquid doesn't—down to a microscopic level beyond the reach of non-scientific exploration, it is always clear whether a patch of something is liquid, or rather the rigid container of the liquid. As in the wax example, in some situations this isn't true: Water in a puddle is usually not clearly separable from the non-water mud. But again, in many, and many typical, situations, the boundary is absolutely clear. So nature itself can create sharp, macroscopically and easily observable boundaries.

However, for the moment, I go with the assumption that this is a coincidence. There are many more natural boundaries that are hard to understand. We know now that on Earth, the boundary between life and non-life for easily observable objects is very specific: Life is that which is equipped with DNA and a working energy supply, non-life is all the rest. This was not clear for thousands of years. Still, life and non-life were clearly separated in thought and language.

And we know that many observable species are actually separated from each other perfectly, i.e., they cannot create offspring with each other. But the reasons for this are quite intricate (and therefore, some different species can create common offspring that is of a third kind). But it is interesting that while mice and rats have had their distinct names at all times, other similarly distinct species got common names and were mixed up by most people and languages (examples missing here).

But, as I said, I consider the (many) obvious natural boundaries just an input to evolution. And I leave the examination of whether this is OK open = open question 1.

The next question, then, is: Are all abstractions comparable to those we share with other species, which would, for example, mean that they need only little rational thinking, and no language at all? Or are many abstractions, in their way of being, evolutions only of human thought? I would like to start from the stance that even the abstractions that we humans share with some other species (like being circular) are a coincidence, i.e., that I can argue about all abstractions as if they were human-only (like being married, or being king). But I have not yet argued that this is the case = open question 2.

So, for abstractions that are human-only (or, as said above, all abstractions viewed as human-only): It's obviously not a language thing: There are abstractions for certain dance patterns, and certain musical ideas, and certain mathematical counter-examples even before anyone has a word for the respective abstractions. Reading treatises on music theory even from the 18th century, and listening to even a simple piece of Mozart, shows that language lagged behind the actual knowledge of composers about abstractions by millions of miles, so to speak.

And writing down, or even discussing, in language one's thinking about abstractions creates its own paths of thought, because language is, on the whole, metaphorical. So the abstractions found in written documents, and even in discussions, are a mixture of absurd and agreeable ones.

But how, then, can one usefully characterize abstractions, and humans' use of them?


Friday 23 October 2020

Another, Not At All Original, Thought on Induction - Replace "Implies" With "Causes"


I don't really think the following works ... but: As another cure for the non-black-non-raven paradox, would it be possible to have a new logical operator "causes"? The difference from "implies" would be that no "ex falso quodlibet" would be allowed with "causes": A falsity cannot cause anything; and then of course the contrapositive would not work, and the paradox would vanish.

The truth table of "causes" would be interesting: "true causes true" could be either true or false, depending on whether the first truth actually causes the second. I wonder whether just leaving that open in propositional calculus still allows for some conclusions to be drawn; for at least "a causes b" and "b causes c" would have to imply "a causes c".
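
A minimal sketch of that truth table in Python, under the reading given above; the three-valued representation and the names are mine, not an established logic:

    UNDETERMINED = "undetermined"

    def causes(a: bool, b: bool):
        # ex falso quodlibet is ruled out: a falsity cannot cause anything
        if not a:
            return False
        # a true antecedent that does not bring about its consequent causes nothing
        if not b:
            return False
        # "true causes true": left open, it depends on the actual causal link
        return UNDETERMINED

    # Note that the contrapositive move of the raven paradox is blocked:
    # the value of causes(a, b) tells us nothing about causes(not b, not a).
    for a in (False, True):
        for b in (False, True):
            print(a, b, causes(a, b))

Whether transitivity ("a causes b" and "b causes c" imply "a causes c") can then be added on top of this without collapsing back into material implication is exactly the open question of the paragraph above.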

I am sure someone has already worked this through, and shown it to be problematical; and/or mapped it to some multi-valued logic, with corresponding results.

A Thought on Induction - the Non-Black-Non-Raven Paradox Might Not Be That Important

From pure propositional calculus, we of course have a → b ⇔ ¬b → ¬a. One of the problems with induction is the non-black-non-raven paradox: If we use, in an inductive reasoning process, an example of a black raven as "counting towards the conclusion raven → black", then an example of a non-raven non-black thing should count towards the same conclusion—or should it? To me, this confuses the process of induction with the result of induction. Even if we assume the latter is the logical formula "∀ x: Raven(x) → Black(x)" (which is not at all clear: It might be that the result of induction is a form of weighted proposition, or an expectation, or a belief): Of course, from the result we can conclude "∀ x: ¬Black(x) → ¬Raven(x)"; but using the non-black non-raven example in the same way as the black raven is not at all required; and, I say, not even plausible. After all, this single example (or even a set of such examples) is not a logical statement, but, at best, a ground instance of some as yet unclear logical statement(s). From the fact that we have an object 1 in the world that can be described as the combined proposition (non-raven1, non-black1) being true, we can trivially conclude that non-raven1 → non-black1 is true; as we can also conclude non-black1 → non-raven1, and, indeed, non-raven1: But all this, of course, does not tell us that all things are not ravens, or even that more than one thing is not a raven.
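
A small sketch of that last point in Python; the two booleans simply encode the hypothetical object 1 of the paragraph above, which is neither a raven nor black:

    raven1 = False
    black1 = False

    def implies(a, b):
        # the material implication: false only when a is true and b is false
        return (not a) or b

    # Per-object implications: all trivially true, in every direction.
    assert implies(not raven1, not black1)   # non-raven1 -> non-black1
    assert implies(not black1, not raven1)   # non-black1 -> non-raven1
    assert not raven1                        # and, indeed, non-raven1

    # But none of these are universal statements: nothing here licenses
    # "for all x: not Raven(x)", and nothing here says anything about ravens.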

So my argument is: The inputs to the "inductive reasoning process" (however that works) are facts; the output is some sort of (maybe enhanced) logical statement. And thus, the two are handled wildly differently.

Counterargument: Think about a sort of induction where the inputs are already logical statements (e.g. on subsets): Let's say we know that all A2 are B; and all A3 are B; and all A5 and all A7 and all A11 are B; etc. How does inductive reasoning let us conclude that all Ap, where p is prime, are B? If this is induction, then the "facts" above are just small logical assertions, e.g. "all raven1 are black" (there only being one raven1, but this is then a coincidence). It is necessary to check whether this model of induction—which more closely follows deduction's black box, where both input and output are logical statements—is worthwhile to study.
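
A toy sketch of this counterargument in Python; the indices and the guessing rule are invented purely for illustration, and the "inductive leap" here is nothing more than noticing that all the known indices are prime:

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Indices p for which we already accept the small universal statement
    # "all A_p are B".
    known_statement_indices = {2, 3, 5, 7, 11}
    assert all(is_prime(p) for p in known_statement_indices)

    def hypothesis(p):
        # the induced claim: "all A_p are B" is asserted for every prime p
        return is_prime(p)

    # Both the inputs and the output are logical statements here; whether this
    # is a worthwhile model of induction is the open question of the text.
    print(hypothesis(13), hypothesis(9))  # asserted for 13, not for 9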

A Thought on Induction - Grue and Time

In some treatises on induction, time seems to be essential: Induction is presented as a process where, over time, more and more confirming examples are encountered, and therefore the inductive conclusion gets, in some way or the other, more and more certain. This underlies, it seems to me, for example, Goodman's grue and bleen paradox.

Right now, I don't see why time is an essential property, different from all other properties. Induction is done on a set of examples: If I pull a hundred tiles from an urn containing thousands, and all the rectangular ones are blue, then I could inductively assume that all rectangles are blue. Of course, this hinges on all those other aspects of induction that have to be discussed—but by itself, there is no need to pull the examples one after the other. But then, the grue problem disappears: If time is just another property of the input examples, either I pull only blue rectangles for both t=now and t=then, and then the induction result is "rectangle implies blue". Or I pull blue rectangles for t=now and green ones for t=then, and then the result is "rectangle implies grue" (or was it bleen?).
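
A sketch of that point in Python, treating time as just another attribute of each sampled example; the data, the attribute names, and the crude rule-guessing function are made up for illustration only:

    def induce_color_rule(samples):
        """Guess a rule of the form "rectangle implies <color>" from samples
        given as (shape, color, time) tuples, time being just another attribute."""
        colors_by_time = {}
        for shape, color, time in samples:
            if shape == "rectangle":
                colors_by_time.setdefault(time, set()).add(color)
        if all(colors == {"blue"} for colors in colors_by_time.values()):
            return "rectangle implies blue"
        if colors_by_time.get("now") == {"blue"} and colors_by_time.get("then") == {"green"}:
            return "rectangle implies grue"  # blue up to the cutoff, green afterwards
        return "no simple rule found"

    print(induce_color_rule([("rectangle", "blue", "now"), ("rectangle", "blue", "then")]))
    print(induce_color_rule([("rectangle", "blue", "now"), ("rectangle", "green", "then")]))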

Of course, there is at least one practical problem with sampling over different times: I cannot sample something from the future. But there are also many other sampling problems, and in the sciences and humanities, we have to be inventive to get rid of these; whereas in practical life, we may just accept them and live with coarser inductive approximations. But this is not a problem of inductive reasoning per se. And one solution is always to go home for now and wait until t=then, when we can sample better. So you might have to wait for a solar eclipse to get an example of what you are interested in.