The Irrationality of Rationalism?

-- By MoAli - 20 Feb 2024

Natural language modeling faces a serious problem. It is neither the massive environmental cost of training nor the dubious provenance of the underlying data that renders the scheme essentially unworkable; rather, it is the fact that deterministic representations of language cannot meaningfully reverse-engineer human communication.
Prescriptivist constructions of language are fundamentally limited: they fail to appreciate that the making and re-making of language merely reflect its core function as one of many tools humans use to communicate. In practice, any proposed grammar of natural language must ultimately be a descriptive project of ex post analysis, useful only to the extent that it conforms, by design or by chance, to the underlying purpose of communication: to transmit information. Transmission may be either informative or ritualistic in nature, and the transmitter must have some conscious, unconscious, or accidental reason for communicating. The process by which concepts dimly formed in thought are retrieved and imperfectly molded into phonemes and symbols thus presupposes the psychosocial context that brings about and directs this externalization of man’s internal world. In other words, language construction is inseparable from its impetus.
In natural language modeling, that “impetus” (if it can be called that) is identifying some underlying structure in a high-dimensional embedding space. The embedding space is essentially a mapping of word vectors to other word vectors from the training set, in which the relative distances between vectors should correlate with semantic and syntactic relationships among the words. This is achieved by expending incredible amounts of compute (on the order of 3.14e+23 floating-point operations in the case of ChatGPT) to navigate changing representations of the embedding space, following an optimization algorithm that minimizes a loss function at each step. Such methods can be as simple as linear regression (y = mx + b) or as overcomplex as a neural network. In truth, many of these “learning” methods can be replicated in standard statistical software like MATLAB, and in their traversal of data they are just as “intelligent.” The difference between the past and today is not the development of new theories; in terms of computational theory there may actually have been a great deal of regression (pardon the pun). What has changed in the last few years is the concentration of compute power in a small number of corporate hands and the quantity of information available to those hands through surveillance capitalism.
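To make the mechanics concrete, the following minimal sketch (in Python with NumPy; purely illustrative, with invented toy vectors and data rather than anything drawn from a real model) shows both ideas in miniature: first, an “embedding space” of three hand-made word vectors whose cosine similarity stands in for semantic relatedness; second, the generic training loop, here fitting nothing grander than y = mx + b by gradient descent on a mean-squared-error loss.

# Illustrative sketch only: the vectors and data below are invented,
# and the procedure is generic gradient descent, not any production
# system's code.
import numpy as np

# --- A miniature "embedding space" ----------------------------------
# Real models learn vectors with thousands of dimensions for tens of
# thousands of tokens; three dimensions suffice to show the geometry.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "apple": np.array([0.1, 0.9, 0.9]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: values nearer 1.0 mean
    # the model treats the words as more closely related.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))

# --- The generic training loop: minimize a loss, step by step -------
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, 100)  # noisy line, m=2, b=1

m, b = 0.0, 0.0   # initial parameter guesses
lr = 0.01         # learning rate: the size of each downhill step
for _ in range(2000):
    err = (m * x + b) - y
    # Gradients of the mean-squared-error loss with respect to m and b.
    m -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(m, b)  # converges near (2.0, 1.0)

Replacing the straight line with a network of millions of parameters changes the scale of this loop, not its shape; the “learning” is loss minimization all the way down.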
Natural language processing is thus the apotheosis of linguistic prescriptivism. It is part and parcel of the Euro-American epistemic practice, found across our disciplines, of reducing complex matters to oversimple representations built on a multitude of cultural assumptions and then mistakenly working backward from there. This is enabled by a culture gripped by the theology of relentless formalism, a totalizing project that would collapse under the weight of a single question: “why?” As Dickens laments in Hard Times:

