Law in Contemporary Society

MoAliFirstEssay 4 - 30 Mar 2024 - Main.EbenMoglen

The Irrationality of Rationalism?

Prescriptivist constructions of language are fundamentally limited in that they fail to appreciate that language making and re-making are merely reflections of language’s core function as one of many tools humans use to communicate. In practice, any proposed grammar of natural language must ultimately be a descriptive project of ex post analysis, useful only to the extent that it conforms, by design or by chance, with the underlying purpose of communication: to transmit information. Transmission may be either informative or ritualistic in nature, and of course, the transmitter must have some conscious, unconscious, or accidental reason for communicating. Thus, the process through which concepts dimly formed in thought are retrieved and imperfectly molded into phonemes and symbols presupposes the psychosocial context that brings about and directs this externalization of man’s internal world. In other words, language construction is inseparable from its impetus.
In natural language modeling, that “impetus” (if it can be called that) is identifying some underlying structure in an n-dimensional embedding space. The embedding space is essentially a mapping of word vectors to other word vectors from the training set, in which the relative distances between them should correlate with semantic and syntactic relationships among the words. This is achieved by expending incredible amounts of compute (3.14e+23 FLOPs in the case of ChatGPT)

Is that 3.14 accidental?

to navigate changing representations of the embedding space following an optimization function and minimizing a loss function at each step. Such functions are statistical methods that can be as simple as linear regression (y=mx+b) or as overcomplex as a neural network. In truth, many of these “learning” methods can be replicated in standard statistical software like MATLAB,

And therefore GNU Octave. Software should be free as in freedom.

and in their traversal of data are just as “intelligent.” The difference between the past and today is not the development of new theories; in terms of computational theory there may actually have been a great deal of regression (pardon the pun). What has changed in the last few years is the concentration of compute power in a small number of corporate hands and the quantity of information available to them through their surveillance capitalism.
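A minimal sketch of this point in plain Python (the toy data, learning rate, and iteration count are my own illustration, not anything from the essay): the “learning” in these systems is ordinary numerical optimization. Here gradient descent fits y = mx + b by repeatedly stepping against the gradient of a squared-error loss, which is the same loop that, scaled up enormously, trains a language model.

```python
# Toy data generated from y = 2x + 1 (no noise, for clarity).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

m, b = 0.0, 0.0  # parameters, initialized arbitrarily
lr = 0.02        # learning rate (step size), chosen small enough to converge
n = len(xs)

for _ in range(5000):
    # Mean-squared-error loss: L = (1/n) * sum((m*x + b - y)^2).
    # Its partial derivatives with respect to m and b:
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / n
    # Step "downhill" on the loss surface.
    m -= lr * grad_m
    b -= lr * grad_b

print(round(m, 2), round(b, 2))  # → 2.0 1.0
```

The same loop could be written in a few lines of MATLAB or GNU Octave; nothing in it depends on specialized “A.I.” machinery beyond the scale of the data and the number of parameters.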

 Natural language processing is thus the apotheosis of linguistic prescriptivism. It is part and parcel of the Euro-American epistemic practice, found in all our studies, of reducing complex matters to oversimple representations based on a multitude of cultural assumptions and then mistakenly working backward from there. This is enabled by a culture gripped by the theology of relentless formalism, a totalizing project that would collapse under the weight of one question: “why?” As Dickens laments in Hard Times:

Jürgen Habermas, the German philosopher, warned in 1963 that, “an exclusively technical civilization … is threatened … by the splitting of human beings into two classes — the social engineers and the inmates of closed social institutions.” It is our epistemology of blind formalism that encourages the development of the sort of technologies that are neither practically sound nor socially uplifting. The technical development of so-called “A.I.” is a small window into the unwieldy artifice of assumptions and overgeneralizations that has provided the creative thrust of what passes for scientific development in our society. Taking a step back and considering how our commitments, our politics, and our history define the metrics and modes of innovation is our only antidote to the irrationality of rationalism.
I'm grateful to have had a chance to read this. With the parts that I know a lot about, I agree too much to have much improvement to offer. With those of which I am profoundly ignorant, I gain the invaluable chance to approach the unknown by a familiar but challenging road.

Therefore I have only two substantive suggestions, one of each kind. The Chomskian argument about the limits to computational natural language processing should, if correct, imply some form of answer to the evolutionary question. So one place where your vast eclectic reach can be supplemented is, I think, Chomsky and Robert Berwick's Why Only Us?

Taking this pathway into Sufi thinking, I am left again with a fresh view of an old puzzle: what it means in Islamic terms to know the law. It is made of words, in fact of a fixed corpus in a precisely-specified vocabulary, whose size is such that a child's memory can encompass all of it. (The superiority of actual human to purely artificial intelligence is how much more we can do with how much less in energy-budget and learning hardware terms.) But, as you show, knowledge of the End (which is the law, unless I truly misunderstand) is not knowledge of earthly affairs, and cannot be approached through logic and the senses, but only by what Holmes calls "a glimpse of the infinite." There must be a sentence or two more to wring from this that would resonate as the rest of your thinking does.

 
You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable.

Revision 4 (r4) - 30 Mar 2024 - 15:30:44 - EbenMoglen
Revision 3 (r3) - 24 Feb 2024 - 03:44:40 - MoAli
All material marked as authored by Eben Moglen is available under the license terms CC-BY-SA version 4.