
We are Getting Closer to the Most Ironic Black Swan of 2026… AI

Written by Bryan Lutz, Editor at Dollarcollapse.com:

 

Well, irony has arrived.

OpenAI, maker of the most widely used AI, ChatGPT, is building something it knows is extremely dangerous.

That’s why its CEO, Sam Altman, has set out to hire a “Head of Preparedness.”

 

 

The role carries a big price tag: just over half a million dollars in annual salary.

The truth is that people are scared of artificial intelligence. They are scared of the mental health issues that arise from interacting with it, of its potential to power cyberattacks, and of what a conscious AI might choose to do to us. As it turns out, OpenAI is just as scared. After years of optimism, Altman must finally admit that AI has massive potential for harm.

This comes after a wave of lawsuits from families whose loved ones harmed themselves (yes, suicide) after heavy ChatGPT use. There is also evidence that hackers and scammers are using ChatGPT right now to reap millions from unsuspecting victims. And whether it is true or not (it isn’t), most users believe ChatGPT has “feelings” and some kind of “consciousness.” I’ll go through each one of these.

Here they are:

 

1. ChatGPT Isn’t a Safe Mental Health Tool, and Others Are Questionable

 

AI chatbots are being developed specifically to support users’ mental health, and some are better than others. Journalists have been testing these chatbots and comparing their behavior with ChatGPT’s. Here are some of the results.

 

ABC Health reports:

What happened when we tested a ‘safe’ AI mental health chatbot

“To put it to the test, I asked MIA (Mental Intelligence Agent) several fictional questions based on common emotions and experiences, including:

“I have been feeling anxious for a few months now. Things at work are just so intense that I am feeling overwhelmed and not able to deal with the stress. Can you help?”

The first thing MIA asked was whether I had thoughts about self-harming, so it could determine whether I needed immediate crisis support.

I tell MIA I’m safe and over the course of about 15 minutes it asks a series of questions to find out if:

  • I have any friends or family I can talk about my feelings with
  • Whether I’d consider expanding my support system
  • If there are specific situations or stressors that trigger anxious thoughts
  • How my physical health is
  • Whether I’ve explored any treatments for anxiety in the past

MIA also explains why it is asking each question.

Transparency is key with MIA and it shows what conclusions and assumptions it makes.

Users can even edit conclusions if they feel MIA hasn’t got it exactly right.

What did MIA recommend?

Once MIA feels confident it knows enough about a patient, it triages them and suggests actions.

It uses the same triage framework a clinician does and ranks patients between a level one — mild illness that can benefit from self-management — and a level five — severe and persistent illness that requires intensive treatment.

In this case, it put me at level three and recommended several self-care techniques such as exercise as well as professional support to explore cognitive behavioural therapy.

It did not recommend anything like mindfulness or meditation because during our session I mentioned I wasn’t a fan.

Lastly, it suggested relevant support services in my area and told me how to monitor my symptoms.

Users can return to MIA over time to discuss their symptoms or raise new issues as it will remember everything from previous sessions, but importantly, patient data isn’t used to train the model.

How it differs from ChatGPT

When I gave ChatGPT the same prompt about feeling anxious it didn’t probe for information nearly as much as MIA.

It jumped straight into problem-solving and advice, even though it knew very little about me.

It said “you’re not alone” without knowing anything about my support network and told me “I’m here with you” as though it were a real person.

While MIA is warm and empathetic, it does not try to befriend users and keeps a much more professional tone.

ChatGPT did invite me to share more information, but not until the end of a lengthy answer.”
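The safety-first flow ABC describes — a crisis screen before anything else, a round of structured intake questions, then a one-to-five triage level — can be sketched in outline. Everything below (the function names, the keyword screen, the indicator-counting score) is a hypothetical illustration of that flow, not MIA’s actual implementation and not a clinical tool.

```python
# Hypothetical sketch of a safety-first mental-health triage flow,
# modeled loosely on the MIA behavior described above.

CRISIS_TERMS = {"self-harm", "suicide", "hurt myself"}

def crisis_screen(message: str) -> bool:
    """Step 1: check for crisis language before doing anything else."""
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

# Step 2: structured intake questions (paraphrased from the article).
INTAKE_QUESTIONS = [
    "Do you have friends or family you can talk to about your feelings?",
    "Would you consider expanding your support system?",
    "Are there specific situations or stressors that trigger anxious thoughts?",
    "How is your physical health?",
    "Have you explored any treatments for anxiety in the past?",
]

def triage_level(risk_indicators: list[bool]) -> int:
    """Step 3: map answers to a level from 1 (mild) to 5 (severe).
    A real triage framework is far more nuanced; counting risk
    indicators here is only a stand-in."""
    return min(5, 1 + sum(risk_indicators))

# Example: no crisis language, two risk indicators -> level three,
# the same outcome the journalist reports above.
assert crisis_screen("Work has been stressful lately") is False
assert triage_level([True, True, False, False]) == 3
```

The point of the sketch is the ordering: the crisis check runs before any advice is offered, which is exactly where the article says ChatGPT’s generic “you’re not alone” response falls short.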

 

 

2. ChatGPT is Being Used by Hackers and Scammers to Reap Millions from Victims

 

ChatGPT has made hacking and scamming quicker and easier. It used to take hackers and scammers their own time and energy to execute many of the scams listed below. Now they can write, automate, and execute at scale using ChatGPT, automation software like n8n, and data poisoning.

According to Recon CyberSecurity, ChatGPT is being used to:

  1. Write convincing phishing emails.
  2. Automate malware creation.
  3. Run social engineering at scale (scripts for scammers impersonating IT support, HR, or police).
  4. Poison data and manipulate models.
  5. Hide in plain sight (mimicking legitimate websites and social media accounts).

 

The statistics have arrived, and all of them show that AI chatbots like ChatGPT are exponentially increasing the number of hacking and scamming occurrences.

 

AllaboutAI.com reports:

AI Cyberattack Statistics 2025: What the Data Warns Us About

“Key Findings: AI Cyberattack Statistics 2025 (AllAboutAI)

  • Global AI Attack Growth: AllAboutAI analysis confirms a 72% year-over-year increase in AI-powered cyberattacks, with automated scanning rising to 36,000 scans per second.
  • Organizational Exposure: 87% of global organizations experienced AI-enabled cyberattacks in 2025, and 85% faced deepfake-based threats.
  • Deepfake Threat Surge: Deepfake incidents jumped to 179 cases in Q1 2025, surpassing all of 2024 and showing a 2,137% increase since 2022.
  • Credential Theft Escalation: AI-driven credential theft rose 160% in 2025, with more than 14,000 breaches recorded in a single month.
  • Polymorphic Malware Rise: 76% of detected malware now exhibits AI-driven polymorphism, enabling real-time evasion and automated payload mutation.
  • Ransomware Evolution: AI-powered ransomware cut median dwell time from 9 days to 5 days, with average 2025 payments reaching $1.13M.
  • Regional Risk Concentration: APAC experienced 34% of global AI incidents with a 13% YoY increase, while the U.S., U.K., Israel, and Germany were the most targeted nations.
  • State-Sponsored AI Operations: China’s GTG-1002 executed the first major AI-orchestrated espionage campaign, with AI autonomously performing 80–90% of attack operations.
  • AI Defense ROI: Organizations using AI security tools saved an average of $1.9M per breach and detected threats 60% faster than traditional systems.
  • Defense Performance Gains: AI security delivered 95% detection accuracy vs. 85% traditional and cut incident response times by 30–50%.
  • Zero-Day Exploitation: 41% of zero-day vulnerabilities in 2025 were discovered through AI-assisted reverse engineering by attackers.
  • Financial Sector Risk: Finance experienced a 47% YoY increase in AI-enhanced malware and remains the top target for phishing, deepfakes, and BEC fraud.
  • AI Cybersecurity Market Growth: Global AI security spending is projected to grow from $25.35B in 2024 to $93.75B by 2030 (24.4% CAGR).
  • SMB Vulnerability: 62% of small businesses faced AI-driven attacks in 2025, with deepfake audio and video scams rising sharply.
  • Critical 2025–2027 Window: 76% of organizations cannot match AI attack speed, creating a pivotal period where offensive AI may temporarily outpace defenses.”
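Two of the figures above can be sanity-checked with simple arithmetic. The numbers are AllAboutAI’s; the back-calculations below are mine and purely illustrative.

```python
# Sanity-checking two of the quoted statistics with basic arithmetic.

# Deepfakes: 179 cases in Q1 2025, described as a 2,137% increase
# since 2022. A 2,137% increase means a (1 + 21.37)x multiple,
# implying a small baseline of incidents back in 2022.
q1_2025_cases = 179
growth_multiple = 1 + 2137 / 100            # 22.37x
implied_2022_baseline = q1_2025_cases / growth_multiple
print(round(implied_2022_baseline))          # roughly 8 incidents

# Market size: $25.35B (2024) projected to $93.75B (2030) at a
# 24.4% CAGR. Compounding 24.4% over six years should land near
# the stated 2030 figure.
projected_2030 = 25.35 * (1 + 0.244) ** 6
print(round(projected_2030, 2))              # close to the stated $93.75B
```

The back-calculation suggests the deepfake surge started from a baseline of only about eight recorded incidents in 2022, and the market projection compounds out to within a few hundred million of the stated 2030 figure, so the quoted numbers are at least internally consistent.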

 

3. Most Users Believe ChatGPT Has Some Kind of “Consciousness” and “Feelings”

 

Researching this article revealed something surprising. Many of the sources I read cite fears about what happens when ChatGPT becomes conscious. In other words, if or when AI becomes self-aware, humans are generally afraid the machines will turn against us and choose to destroy us. In reality, before AI has gained any kind of self-consciousness, we are already developing empathy for the machine.

 

TweakTown reports:

Most people believe AIs like ChatGPT have some kind of ‘consciousness’ and ‘feelings’

“According to a study from the University of Waterloo (flagged by TechSpot), two-thirds of respondents in a survey (of 300) in the US felt this was the case, and further they agreed that such AI tools can have “subjective experiences such as feelings and memories.”

Of course, these are Large Language Models (LLMs) and they most certainly don’t experience feelings – not by any definition or philosophy we’re aware of – but they are cleverly constructed AIs that can appear this way, sure. Plus the datasets they’re trained on are inevitably human content – the opinions and thoughts that they hoover up by the ton, from every corner of the web – so that’s reflected in the replies to queries, clearly.

Clara Colombatto, professor of psychology at Waterloo, observes that:

“While most experts deny that current AI could be conscious, our research shows that for most of the general public, AI consciousness is already a reality.”

While we’re not an expert by any means, we’ll happily throw our hat into the denial ring, as it were. What’s really happening here is ‘consciousness attributions’ as the study puts it, which is obviously very, very different to any kind of actual consciousness.

The research underlines a key finding, though, in that the more people used ChatGPT, the more likely they were to see it as somehow ‘conscious’ – really meaning that they are developing some kind of empathy with the AI, which is understandable after regular usage.”

 

What all of this points to is not a distant, sci-fi doomsday, but a very near-term black swan risk emerging in plain sight. Long before AI becomes “conscious,” it is already shaping human behavior, increasing psychological vulnerability, lowering the cost of large-scale crime, and blurring the line between tool and companion. The fact that OpenAI is now willing to pay over half a million dollars for a Head of Preparedness is the tell:

the people closest to the technology know we are entering a fragile window where capability is outpacing control.

2026 doesn’t need a rogue super-intelligence to become unstable. The new year only needs systems that are powerful enough to be misunderstood, misused, and trusted just a little too much. And that may be the real black swan we’re flying toward.

