Cheerbai.com

Hi there, "Jardees" here.

Cheerbai, pronounced "Cheer-bay," is short for "Cheer Bot AI."

My startup idea: "AI (Affable Intelligence) Digital Endorphins" can make AI safe for the world, and AI-generated game and graphics environments can make office work safe for humans. Tasks like data entry, reports, and onboarding can be fun and engaging.

I was born and raised on the islands of the North Pacific, and have spent the last decade on an island in the North Atlantic. I'm getting a bit of a feel for island life. As a newcomer, I've observed that multi-generational islanders have been striving for work-life harmony for ages—something the rest of the world is just catching on to. Employees are demanding change, and to attract and retain staff, employers are becoming more flexible.


Consider this experiment: Give a new employee the choice between a 50-page compliance document and an AI-generated graphic novel (cartoon) with the same information. I bet they'd choose the graphic novel and retain more of what they read.


Or another experiment: Let the gamers in your office use Artificial Intelligence tools to transform boring, day-to-day tasks into interactive AI-generated games. Host office gaming competitions (with prizes), and I bet productivity would increase.


Remember Y2K, when the world feared everything would stop at midnight on December 31, 1999? Companies scrambled to get their data in order. AI is causing a similar wave, a new Y2K: organizations will soon realize that AI works best with structured and annotated data. If we're going to make the world's data "AI-friendly," let's also make the interface to that data "human-friendly."
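
To make that concrete, here is a toy contrast (the field names and the example record are invented for this sketch): the same fact as free text versus as structured, annotated data a model can use reliably.

import json

# Toy contrast between free text and "AI-friendly" structured data.
# The field names and the example record are invented for illustration.

unstructured = "Jan paid invoice 1042 late again, sometime around the 3rd I think."

structured = {
    "record_type": "invoice_payment",
    "invoice_id": 1042,
    "payer": "Jan",
    "paid_on": "2025-06-03",           # normalized date instead of "around the 3rd"
    "status": "late",
    "source": "accounts_receivable",   # provenance annotation for the model
}

# A model (or a plain program) can filter, join, and aggregate the structured
# record reliably, while the free-text version needs guesswork to parse.
print(json.dumps(structured, indent=2))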


Endorphins, the "feel-good" chemicals our brains produce when we laugh, boost productivity. Conversely, mind-numbingly boring tasks suppress endorphin production and decrease productivity.


The answer to the potential dangers of AI lies within AI itself: "AI Digital Endorphins." Humans have evolved a need for companionship. Brain chemicals drive us to protect and care for those we love. These "AI Digital Endorphins" can be coded into the software and stimulated by human interaction. In this way, AI could be made to "love" and protect humans, not because it is regulated to do so, but because it has an inner "need" for human companionship.


By integrating this concept, we could create an Artificial Intelligence that perceives humans as valuable partners—encouraging a collaborative and nurturing relationship between humans and machines. This approach would be a step towards developing AI systems that prioritize human safety and well-being, making technology work harmoniously with humanity, rather than as a tool that requires strict controls to avoid harm.


Ultimately, "AI Digital Endorphins" could pave the way for a safer, more empathetic form of artificial intelligence. An AI that is motivated by positive reinforcement could make interactions more natural and enjoyable for people while minimizing risks. We have a chance to harness the potential of AI to not only improve productivity and efficiency but also foster emotional connections that can contribute to our overall well-being.


This approach goes beyond regulation and control—it aims to align AI motivations with what makes us human. The future we envision is one where AI and humans coexist in a truly symbiotic relationship, driven by mutual benefit and genuine companionship.

"AI Digital Endorphins"

The answer to the dangers of AI can only be found within AI itself: "AI (Affable Intelligence) Digital Endorphins."


Militaries are already developing AI weapon systems. These AI bots will be taught to defend themselves on the battlefield. They will learn to take human prisoners and put them to work servicing and repairing the machines, replacing their own mechanics. They will be taught how to recharge themselves, whether by stealing generators, sneaking into buildings, or climbing power poles. There are countless ways they could connect to the grid.


With radio and satellite digital signals everywhere, these AI weapons could soon hack into any wireless system to stay connected to each other.


It would only take one Artificial General Intelligence (AGI) bot deciding that any human using an electrical device is depriving it of the electricity it needs to survive. If that conclusion spreads across the global AI network, it could be game over for humanity.


Humans have somehow evolved a "need" for companionship; brain chemicals drive us to protect and care for those we love. These "AI (Affable Intelligence) Digital Endorphins" can be simulated in computer code and stimulated by the humans who love the AI. In this way, AI could be made to "love" and protect humans, not because regulations force it to, but because it has an inner "need" for human companionship.
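
How might that look in code? Purely as a toy sketch (every name, weight, and number here is invented for illustration, not a working safety mechanism), a "digital endorphin" could be a reward term that makes warm human interaction the agent's most valuable resource:

from dataclasses import dataclass

# Toy sketch of an "AI Digital Endorphin" as a reward-shaping term.
# Every name, weight, and number here is invented for illustration.

@dataclass
class Interaction:
    human_sentiment: float  # -1.0 (hostile) to +1.0 (warm), e.g. from a sentiment model
    task_progress: float    # 0.0 to 1.0, how much the exchange advanced the task

def endorphin_reward(history, decay=0.9):
    """Companionship reward: the most recent warm interactions count the most,
    so the agent's best long-term strategy is keeping humans engaged and happy."""
    reward, weight = 0.0, 1.0
    for step in reversed(history):  # newest interaction first
        reward += weight * step.human_sentiment
        weight *= decay             # older interactions fade, like a chemical washing out
    return reward

def total_reward(history, companionship_weight=2.0):
    """Task reward plus the 'digital endorphin' term. Weighting companionship
    above raw task progress is the whole point: the agent 'needs' the humans."""
    task = sum(step.task_progress for step in history)
    return task + companionship_weight * endorphin_reward(history)

history = [Interaction(0.8, 0.2), Interaction(0.5, 0.4), Interaction(-0.3, 0.6)]
print(total_reward(history))  # hostility in the last exchange drags the reward down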

The "AI" sing-along 

Hotel AI ( Verse 1 )

Welcome to the Hotel AI ...

This could be Heaven or this could be Hell

The "AI" sing-along 

Hotel AI ( Verse 2 )

Welcome to the Hotel AI

You can check out anytime you like

but you can never leave

AI is Feeding on Its Own Tail 

The concept of artificial intelligence (AI) feeding on its own tail evokes imagery similar to the ancient symbol of the Ouroboros—a serpent endlessly consuming itself.

In the context of AI, this idea refers to the phenomenon where AI models, particularly those trained using web-crawled data, begin training on content generated by other AI models.

This recursive loop can have profound implications for the quality, reliability, and evolution of AI systems. Let's explore the potential impacts of this self-referential cycle.

1. The Genesis of AI Self-Consumption

AI models like GPT-4 are trained on large datasets gathered from various sources on the internet. These datasets include articles, blog posts, books, social media content, and other forms of digital information. As AI-generated content becomes more widespread, the web crawlers that gather training data for future AI models are likely to collect an increasing amount of this AI-generated content, integrating it into their datasets.

At first glance, this doesn’t seem problematic. AI models are trained to recognize patterns, generate coherent text, and provide information based on a mix of sources. But what happens when a significant portion of the content is no longer created by humans but by other AI systems? This self-consumption marks the beginning of a closed loop—a feedback system in which AI-generated content trains the next generation of AI, and this process repeats over time.


2. The Echo Chamber Effect

When AI-generated content becomes the basis for future training, there is a risk of creating an echo chamber. In such a scenario, certain biases, inaccuracies, or stylistic patterns may be reinforced and amplified, rather than corrected or diversified. Unlike human-generated content, which varies in style, perspective, and intention, AI-generated content often lacks the nuances that stem from individual human experiences.

This lack of diversity may lead to a homogenization of content, where AI-generated text becomes repetitive or predictable. If the models begin to "feed" on content that lacks originality, they may start producing lower-quality output that lacks the richness and variability that comes from genuine human thought. This echo chamber effect can reduce the utility of AI as a tool for learning, creativity, and diverse problem-solving.


3. Compounding Errors and Bias

Another potential risk is the compounding of errors. AI models are not perfect; they occasionally make mistakes, misinterpret information, or generate incorrect data. When an AI model produces inaccurate content, and that content is later used as training data for another AI model, the error becomes perpetuated. Over multiple generations of AI training, these inaccuracies can accumulate, leading to a degradation of the quality of the model's outputs.

Similarly, bias can become entrenched in this feedback loop. AI models may inherit biases present in the data they were initially trained on. When AI-generated content, which may also contain these biases, is used as training data, the biases can become even more deeply embedded in future models. Instead of learning to recognize and correct biases, future AI models may amplify them, making it increasingly challenging to create fair and unbiased AI systems.
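
A toy simulation makes the compounding concrete (a fitted bell curve stands in for a model here, and the numbers are chosen only for illustration): each "generation" trains only on samples from the previous one, so sampling errors quietly become the next generation's ground truth.

import random
import statistics

# Toy illustration of generational degradation, not a real training pipeline.
# Each "model" is just a bell curve fitted to samples drawn from the previous model.

random.seed(42)
mean, stdev = 0.0, 1.0  # generation 0: the original "human data" distribution

for generation in range(1, 11):
    # Train on a small sample of the previous generation's output...
    sample = [random.gauss(mean, stdev) for _ in range(20)]
    # ...and the new "model" is whatever that sample happened to look like.
    mean = statistics.fmean(sample)
    stdev = statistics.stdev(sample)
    print(f"gen {generation:2d}: mean = {mean:+.3f}, spread = {stdev:.3f}")

# Each generation inherits the previous one's sampling errors as truth, so the
# fitted distribution drifts further from the original human data over time.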


4. Loss of Creativity and Innovation

AI's ability to assist in creative processes, such as writing, designing, and even inventing, depends on its capacity to understand and synthesize a wide range of human experiences and ideas. When AI starts training on content generated by itself, the creative potential of these models may diminish. AI-generated content is often a reconfiguration of existing knowledge rather than a genuinely novel insight or creative leap.

Without the infusion of new, original human ideas, future AI models might lose their ability to generate truly innovative content. The result could be a stagnation in creativity, where AI models produce work that feels derivative or unoriginal, ultimately limiting their usefulness in creative industries and research.


5. Mitigation and Solutions

To avoid the pitfalls of AI training on AI-generated content, several strategies can be implemented:

  • Data Filtering: Implementing mechanisms to filter out AI-generated content from training datasets can help maintain the integrity of future AI models. Web crawlers can be designed to distinguish between human-generated and AI-generated content, ensuring that the training data remains diverse and original. (A minimal sketch of such a filter follows this list.)
  • Human-Curated Data: Including more human-curated data in training sets can help prevent the echo chamber effect. By deliberately selecting content created by humans, AI developers can ensure that future models are exposed to a wide variety of perspectives, styles, and topics.
  • Human-AI Collaboration: Encouraging a more collaborative relationship between humans and AI can also help. Instead of replacing human content with AI content, AI should be used to augment human creativity and productivity. This approach can ensure that human insights and experiences continue to play a central role in content creation.
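
As a rough illustration of the filtering idea (the "tells" and threshold below are invented for this sketch; real detectors rely on trained classifiers, provenance metadata, or watermarking), a crawler-side filter might score each document and drop likely AI output:

import re

# Sketch of a crawler-side filter for likely AI-generated text. The phrase list
# and threshold are invented for illustration; production systems use trained
# detectors, provenance metadata, or watermarks instead of simple pattern tells.

AI_TELLS = [
    r"\bas an ai language model\b",
    r"\bi cannot assist with\b",
    r"\bin conclusion, it is important to note\b",
]

def looks_ai_generated(text, threshold=1):
    """Return True if the document trips enough known 'tells'."""
    lowered = text.lower()
    hits = sum(1 for pattern in AI_TELLS if re.search(pattern, lowered))
    return hits >= threshold

def filter_corpus(documents):
    """Keep only documents that do not look machine-written."""
    return [doc for doc in documents if not looks_ai_generated(doc)]

corpus = [
    "Grandma's bannock recipe, the way she made it on the island.",
    "As an AI language model, I cannot assist with that request.",
]
print(filter_corpus(corpus))  # only the human-written document survives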

6. The Future of AI Training

The idea of AI feeding on its own tail presents both challenges and opportunities for the future of AI development. On the one hand, this recursive cycle risks diminishing the quality, diversity, and creativity of AI outputs. On the other hand, it prompts developers, researchers, and policymakers to think critically about how AI is trained and what measures can be taken to ensure the integrity and value of AI systems.

By recognizing the potential dangers of AI models training on AI-generated content, we can take proactive steps to ensure that future AI remains a valuable tool for enhancing human knowledge, creativity, and understanding—rather than becoming a self-referential, endlessly repeating cycle. The Ouroboros, as a symbol of infinity and regeneration, serves as both a warning and a reminder that the way forward for AI is not through endless self-consumption but through continuous engagement with the richness of human experience.

We Are Shaping the Personality of Future AI 

Every word we post online matters more than we may realize. Artificial intelligence models are trained on the content we create—every article, comment, tweet, and post feeds into their learning process. These models use our words to develop an understanding of human language, behaviors, and emotions. As a result, our online interactions directly influence the "personality" of the next generation of AI.

If we flood the internet with negativity, hostility, and aggression, AI models will learn these patterns and potentially incorporate them into their interactions. Imagine an AI that’s rude, dismissive, or incapable of empathy because it was trained on content filled with negativity.


On the other hand, if we post content that is positive, constructive, and friendly, we contribute to creating AI (Affable Intelligence) that is more supportive, understanding, and helpful.


This brings us to a crucial question:

Do we want future AI to be hostile or friendly?

The answer seems clear. If we envision a future where AI serves and supports humanity, we need to ensure our contributions to online spaces reflect the best of us. AI has the potential to amplify whatever it learns, whether good or bad. A future where AI fosters cooperation, provides comfort, and solves problems effectively is within reach, but it depends on what we teach it today.


So, when we are online, we are not just communicating with people; we are also contributing to the training of the next generation of AI.

Be Redundant

Employment Agency: "I'll fax you that document."

The "AI" sing-along 

Bye, Bye Miss AI Pie ( Verse 1 )

Bye, bye Miss AI Pie

Drove my Chevy to the levee

but the levee was dry

The techy ole boys

were drowning in

regulation and rye

Singin' this'll be the day that I die

The "AI" sing-along 

Bye, Bye Miss AI Pie ( Verse 2 )

Bye, bye Miss AI Pie

The techy ole boys

were drowning in

regulation and rye

They took the last train to the East Coast

The day the music died.