GIGO as a General Rule for Information Processing

Abstract by ChatGPT

The essay “GIGO as a General Rule for Information Processing” argues that the principle of “garbage in, garbage out” (GIGO) is a fundamental rule of information processing that applies to various domains, including computer science, psychology, and social science. The author contends that GIGO highlights the importance of input quality in determining the accuracy and validity of output, and that this principle can be used as a guideline for decision-making and problem-solving. Additionally, the essay suggests that GIGO can be applied to personal development, emphasizing the significance of the information and experiences we allow into our lives. Overall, the author asserts that GIGO serves as a valuable principle for individuals and organizations to consider in their pursuit of effective information processing and decision-making.

Overview

All assertions are subject to GIGO, that is, Garbage In gives Garbage Out, no matter how impeccable the logic. Folks think this is just a rule for computers. It ain't. Simple reflection should make the point plain: a huge amount of what we believe is wrong. Read the comments section on any Internet article on any moderately controversial topic where comments are allowed. The comments will be all over the map. Some you may agree with; many you will disagree with. Regardless of your own views, where they are mutually contradictory they cannot all be correct. It does not follow that any are correct, although some may be. It does not follow that you are correct, though you may be.

Any training database is going to be built up from information that follows the same pattern. Some (maybe much) of it will contain contradictions, even internal ones. Some of the assertions may be correct, but where they differ, they cannot all be correct. So a lot of the training data is bound to be wrong; that is Garbage In. The clear result is Garbage Out.
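The logical point can even be made mechanical. The short Python sketch below (a toy brute-force enumeration, invented here purely for illustration) checks both claims: among pairwise contradictory assertions, at most one can be true in any consistent state of affairs, and the state in which none of them is true is perfectly consistent as well.

```python
from itertools import product, combinations

n = 4  # four pairwise contradictory assertions (toy example)

def consistent(world):
    # Law of non-contradiction: no two mutually contradictory
    # assertions can both be true in the same world.
    return all(not (world[i] and world[j])
               for i, j in combinations(range(n), 2))

worlds = [w for w in product([True, False], repeat=n) if consistent(w)]

# In every consistent world, at most one of the assertions is true...
assert all(sum(w) <= 1 for w in worlds)
# ...and the world in which none is true is also consistent: it does
# not follow that any of the contradictory assertions is correct.
assert (False,) * n in worlds
print(f"{len(worlds)} consistent worlds out of {2 ** n} candidates")  # 5 of 16
```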

If we had the omniscient one preparing the training data, we might get more reliable results.

I do not see a way around these fundamental issues, regardless of how incredible the technology is, and it is incredible. The issues are epistemological. Perhaps I misunderstand the issue, but I cannot see how any technology, or any biological organism, can get around this.
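To see why the issue survives any amount of technological brilliance, consider a deliberately trivial "learner" whose procedure is impeccable: it records the most common answer to each question in its training data. The data below is invented for illustration, but it follows the comments-section pattern described above.

```python
from collections import Counter, defaultdict

# Contradictory "training data": one question, mutually exclusive
# answers, exactly the pattern of any comments section. (Invented.)
training_data = [
    ("Is coffee good for you?", "yes"),
    ("Is coffee good for you?", "no"),
    ("Is coffee good for you?", "yes"),
    ("Is coffee good for you?", "no"),
    ("Is coffee good for you?", "no"),
]

def train(data):
    """Impeccable logic: learn the most common answer per question."""
    votes = defaultdict(Counter)
    for question, answer in data:
        votes[question][answer] += 1
    return {q: counts.most_common(1)[0][0] for q, counts in votes.items()}

model = train(training_data)
# Prints "no" -- not because "no" is true, but because it was more
# frequent in the input. Garbage In, Garbage Out, impeccable logic
# notwithstanding.
print(model["Is coffee good for you?"])
```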

From the horse’s mouth, or maybe some other orifice:

Input to ChatGPT:

Explain how GIGO must of necessity apply to ChatGPT output.

Output from ChatGPT:

GIGO, which stands for “Garbage In, Garbage Out,” is a principle that refers to the fact that the quality of output is determined by the quality of input. In the case of ChatGPT, this means that the quality of the model’s responses is directly influenced by the quality of the data it was trained on, as well as the input it receives during use.

While ChatGPT is a highly advanced language model with remarkable abilities to generate human-like responses to various inputs, it is not a sentient being capable of understanding the meaning of the input it receives. Instead, it relies on statistical patterns and associations in its training data to generate responses.

Therefore, if the input to ChatGPT is inaccurate, incomplete, or biased in any way, the model may produce responses that are also inaccurate, incomplete, or biased. Additionally, if the training data is of poor quality, contains errors or biases, or is not representative of the language and concepts it is meant to model, this can also lead to inferior output from ChatGPT.

In summary, GIGO applies to ChatGPT output because the quality of its output is inherently linked to the quality of the input it receives and the quality of the training data it was trained on. Therefore, it is important to ensure that the inputs provided to ChatGPT are of high quality and that the training data is well-prepared and representative of the language and concepts the model is intended to understand.
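ChatGPT's phrase "statistical patterns and associations" can be illustrated with a toy bigram model, which is nothing like ChatGPT's actual architecture but makes the principle visible: each next word is chosen purely by frequency in the training text, so a falsehood in that text is reproduced as fluently as a truth.

```python
import random
from collections import defaultdict

# Toy training text containing a confident falsehood. (Invented corpus.)
corpus = "the moon is made of cheese and the moon is made of cheese".split()

# Record which words follow which: the statistical associations.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=5):
    """Continue `start` with statistically likely next words."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Prints "the moon is made of cheese": fluent, frequent, and false.
print(generate("the"))
```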

Input to ChatGPT:

Given that the law of non-contradiction holds that two contradictory assertions cannot both be correct, it follows that of multiple mutually contradictory assertions at most one can be correct. It does not follow that any are correct. Human assertions are notoriously diverse and mutually contradictory. Material in magazines, articles, reviews, and the comments on these tracts is also notoriously diverse and mutually contradictory. Works published on the Internet give innumerable examples of this. This is the base of information from which ChatGPT works. It uses no intelligence, and even though the training material is curated by humans, those humans are subject to their own limitations: many types of bias, a limited understanding of the world (which is the human condition), and an inability to deal with the complex mass of information available to them. All information must be evaluated and interpreted with an eye to making sense of it, and this is a highly unreliable process, even for the brightest. Couple that with the fact that the body of knowledge of any discipline is bound to be flawed, incomplete, and often wrong, and how is it possible for anyone to rely on the information returned by ChatGPT?

Output from ChatGPT:

You make an important point that human assertions can be diverse and contradictory, and that the information on the internet and other sources can be unreliable or incomplete. This is indeed a significant challenge for any system, including ChatGPT, that relies on these sources of information.

However, it’s worth noting that ChatGPT is not designed to provide definitive or absolute answers to complex questions. Instead, it is a tool that can provide insights, suggestions, or alternative perspectives on a given topic. It can help users explore different viewpoints, generate ideas, and identify potential sources of information.

In other words, ChatGPT is not intended to replace critical thinking or independent research. Rather, it is a tool that can augment and support these activities. Users should not rely on ChatGPT as the sole source of information for important decisions or complex problems. Instead, they should approach the information provided by ChatGPT with a critical eye, and seek to verify and validate any information that they receive.

Moreover, ChatGPT is not infallible, and its responses can sometimes be inaccurate or biased. As you correctly note, the limitations of the training data and the biases of its creators can affect its performance. Therefore, it is important to use ChatGPT with a healthy dose of skepticism, and to verify its responses whenever possible.

In summary, while ChatGPT can be a useful tool for generating ideas and exploring different perspectives, it is not a substitute for critical thinking, independent research, and careful evaluation of information. Users should approach its responses with a critical eye and seek to verify and validate any information that they receive.
