A Reverb on Ground News, AI, influencer trust, and the inner system no app can build for us
When I first heard about Ground News, my reaction was not suspicion.
It was relief.
Finally, I thought.
A tool that could help people see how different outlets frame the same story. A tool that could show the left, the right, and the middle. A tool that might help independent voters, exhausted citizens, and politically overwhelmed people step outside the endless ping-pong match of my side says this, your side says that.
At first glance, it sounded like something this country desperately needs.
Because we do need better media literacy.
We do need to understand framing.
We do need to see how one story can become ten different emotional experiences depending on who writes the headline, who owns the outlet, who funds the platform, and who benefits from our reaction.
So, no, I did not immediately look at Ground News and think, that sounds terrible.
I thought the opposite.
I thought, that sounds useful.
Then I watched Parkrose Permaculture’s video, “Big Dem influencers get dragged for promoting AI biz.”
And the conversation in my household changed.
Because Parkrose was not simply saying creators should never take sponsorships. She was pointing at something bigger: influencer trust, AI-generated summaries, paid access to “clarity,” and the uncomfortable reality that even tools promising to help us see bias can carry their own bias, incentives, and blind spots. In the transcript, Parkrose questions creators promoting Ground News, the lack of consistent ethical standards for social media influencers, and the use of AI-generated news aggregation as a substitute for reading actual journalism.
That is where this gets uncomfortable.
And honestly?
It should.
The Problem Ground News Is Trying to Solve Is Real
Let me be clear: the problem Ground News is responding to is real.
American media is a mess.
Too many people are trapped inside information ecosystems that constantly reassure them they are right. Too many headlines are engineered to make us angry before they make us informed. Too many outlets blur the line between reporting and entertainment, then hide behind that blur when accountability shows up.
We have corporate media consolidation, billionaire-controlled platforms, algorithms built to reward outrage, and influencers acting like journalists without always following journalistic standards.
So I understand the appeal of Ground News.
I really do.
The idea is seductive: put the coverage in one place, label the bias, show the blind spots, reveal the framing, and help people see the bigger picture.
That sounds responsible.
That sounds efficient.
That sounds like a shortcut to media literacy.
But that is exactly where we have to slow down.
Because media literacy is not just the ability to compare headlines.
Media literacy is the ability to ask better questions.
And when a company starts selling that ability as a product, we arrive at the real discomfort here: the commercialization of discernment.
It is one thing to sell a product.
It is another thing to sell the lens through which people are supposed to understand every other product, headline, platform, political claim, and truth.
And no app can do that part for us.
No System Will Save Us If Our Own System Is Broken
This is the part I keep coming back to:
No outside system is going to save us if we refuse to build our own inner system of discernment.
Not Ground News.
Not ChatGPT.
Not Grok.
Not Google.
Not TikTok.
Not YouTube.
Not legacy media.
Not independent media.
Not left media.
Not right media.
Not the so-called middle.
A tool can help us sort information, but it cannot make us honest.
A tool can show us multiple perspectives, but it cannot force us to consider them fairly.
A tool can label something “left,” “right,” or “center,” but it cannot tell us whether the center itself has shifted.
And that matters.
Because what happens if everyone starts trusting the “middle,” but the middle keeps moving?
What happens if the so-called center is not actually truth, but simply the average point between two distorted narratives?
That is false equivalence dressed up as balance. It assumes both sides are equally anchored in reality, even when one side may be working from evidence and the other may be working from fantasy, propaganda, or grievance.
What happens if one side is saying the sky is blue, another side is saying the sky is made of cheese, and the “balanced” middle tells us the sky is probably dairy-adjacent?
That is not objectivity.
That is mathematical nonsense dressed up as fairness.
Truth is not always found halfway between two claims.
Sometimes one side is closer to reality.
Sometimes both sides are missing the point.
Sometimes the framing itself is the trap.
That is why we cannot outsource discernment.
We can use tools.
We can appreciate tools.
We can even love tools.
But we cannot surrender our thinking to them.
Ground News May Show the Spectrum, But People Still Choose Their Comfort Zone
This is one of the biggest concerns I have now.
Even if Ground News works exactly as advertised, what happens when people use it the way people often use information?
What happens when someone opens the app, sees the left-wing framing, the right-wing framing, and the center framing, then simply clicks the version that already feels most familiar?
What happens when the tool shows them a wildly different version of the same story, but instead of becoming curious, they become defensive?
What happens when they look at the opposing headline and think, See? They are twisting it again.
Then what was accomplished?
At that point, Ground News is not breaking the echo chamber.
It is giving the echo chamber a dashboard.
And that is where psychology matters.
Human beings are not neutral processors of information. We like to think we are, but we are not. We bring our fears, identities, loyalties, wounds, assumptions, and emotional investments into everything we read.
If a story confirms what we already believe, it feels clean.
If a story challenges what we believe, it feels suspicious.
That is confirmation bias.
That is cognitive dissonance.
That is identity protection.
And it does not disappear because an app put three headlines beside each other.
In fact, the opposite can happen.
There is a kind of moral licensing that can occur once a person has looked at “both sides.” They may feel they have done the responsible thing. They may feel more objective, more informed, more careful.
But if all they did was glance at the other side before returning to the version they already preferred, the tool did not challenge their bias.
It certified it.
That may be one of the most dangerous illusions of all.
Not bias.
The belief that we have overcome bias because we looked at bias labels.
Maybe that is the deeper discomfort here. We are not only looking for shortcuts to information. We are looking for shortcuts to virtue. We want to feel careful, informed, balanced, and responsible without always doing the difficult inner work those qualities require.
But discernment is not a badge we earn by opening the right app.
It is a practice.
The Work We Cannot Outsource
This is where the Ground News conversation becomes bigger than Ground News.
Because yes, a tool can sort headlines.
A tool can compare outlets.
A tool can show us how different political camps are framing the same story.
But a tool cannot make us thoughtful.
That part is still ours.
If we want to become harder to manipulate, we have to build our own internal analytical engine. Not because every person has time to become a full-time journalist, but because every person living inside a democracy has a responsibility to ask better questions before surrendering their worldview to a headline, influencer, algorithm, or app.
What I am really talking about is thinking about our thinking.
Not just asking, “What does this article say?”
But asking:
“Why did I react that way?”
“Why did this headline make me angry?”
“Why did I want this to be true?”
“Who benefits from my reaction?”
That starts with stripping away emotional language.
When an article tells me a speech was “disastrous,” “shocking,” “heroic,” or “inflammatory,” I have to pause and ask: what actually happened?
If I remove the adjectives, what is left?
Did someone make a speech?
Sign a bill?
File a lawsuit?
Change a policy?
Say a specific sentence?
The raw event matters before the interpretation.
Then I have to look for the strongest opposing argument, not the easiest one to mock. That is the difference between a straw man and a steel man.
If my only understanding of the other side is “they are stupid,” then I have not done the work yet.
I may still disagree.
I may disagree fiercely.
But disagreement without understanding is easy to manipulate.
I also have to reverse the lead.
Headlines are designed to pull me somewhere. Sometimes they inform. Sometimes they provoke. Sometimes they quietly decide what I should feel before I have even read the story.
So I have to ask:
If I started at the bottom of the article, with the data, the quotes, the filing, the transcript, or the actual policy language, would I come to the same conclusion the headline wants me to reach?
That is where primary sources matter.
If the story is about a court case, read the filing if you can.
If it is about a speech, find the transcript.
If it is about a bill, look for the actual text.
Most spin happens in the space between what occurred and how someone summarizes it for us. The closer we get to the source, the less room there is for someone else to decorate reality on our behalf.
And then there is the old question:
Who benefits?
Who benefits from me believing this?
Who benefits from me being afraid?
Who benefits from me being angry?
Who benefits from me giving up?
Who benefits from me seeing my neighbor as the enemy instead of the people profiting from our division?
That question matters because news is not floating in some pure moral atmosphere. It lives inside ownership structures, advertising incentives, political pressure, billionaire interests, algorithmic reward systems, audience capture, and plain old greed.
And finally, maybe hardest of all, we have to practice intellectual humility.
Before I share something, before I build an argument around it, before I let it confirm everything I already believed, I need to ask:
What would I feel if this turned out to be false tomorrow?
Would I be embarrassed?
Would I be angry?
Would I feel robbed of a story I wanted to be true?
That reaction tells me something.
Because if I need a story to be true, I am already vulnerable.
That is the danger of outsourcing discernment.
Ground News may show us multiple versions of a story, but it cannot do the moral work of making us less attached to our favorite version. It cannot make us curious. It cannot make us humble. It cannot make us brave enough to admit when our side is wrong.
That work still belongs to us.
And honestly?
That may be the real literacy gap.
Not only who can afford the app.
But who has been taught how to think through the noise without one.
This is what I think Parkrose is really pressing on. Not just whether Ground News is useful. Not just whether a creator should take a sponsorship. But whether we are becoming too comfortable letting paid tools, paid voices, and automated systems mediate our relationship with reality.
That is a much bigger question than one app.
And once I saw it that way, I could not unsee it.
When Media Literacy Becomes a Product
This is where the paywall issue matters.
If Ground News is offering convenience, that is one thing.
People pay for convenience all the time.
But if the deeper promise is clarity, truth, context, and protection from propaganda, then we have to ask harder questions.
What does it mean when media literacy becomes a subscription?
What does it mean when the tools that supposedly help people escape manipulation are locked behind monthly payments?
What does it mean when misinformation is free, but the product marketed as a way to see through misinformation costs money?
That does not automatically make Ground News evil.
A company needs revenue.
A platform has costs.
People deserve to be paid for labor.
But there is still an uncomfortable irony here:
It is free to be misinformed, but increasingly expensive to be responsibly informed.
And even then, the receipt does not guarantee objectivity.
Because paying for a tool does not mean we have purchased discernment.
It may only mean we have purchased a more sophisticated way to feel discerning.
If the clearest tools for understanding manipulation are only available to people with disposable income, then we have not solved the information crisis.
We have created another tiered system:
Misinformation for free.
Clarity for a fee.
Influencer Trust Is Part of the Product
Parkrose also raises another issue that deserves more attention: influencer trust.
A lot of people do not experience sponsored content as advertising when it comes through a creator they trust.
They experience it as a recommendation.
That matters.
If someone has spent months or years warning their audience about fascism, oligarchy, media manipulation, billionaire capture, propaganda, and the erosion of democracy, then their audience may naturally assume that the products that creator promotes align with those values.
That is why disclosure matters.
That is why ethical lines matter.
That is why “I have to make a living” cannot be the only standard.
Of course creators need income.
I understand that.
CherryCoBiz is not some imaginary castle floating outside capitalism. I understand needing money, needing support, needing a sustainable way to keep creating.
But there still has to be a line somewhere.
Especially when the product being promoted is not mascara, a meal kit, or a pair of shoes.
It is a tool that claims to help people understand reality.
That deserves a higher standard.
AI Is Not Neutral Just Because It Sounds Confident
And this brings me to the AI layer.
This is another piece we have to say plainly:
AI is a tool. And tools can be manipulated to manipulate you.
That does not mean every use of AI is evil.
I use AI.
I love AI.
I struggle with that sometimes, because I know AI is complicated. I know it can be ethically messy. I know it raises questions about labor, creativity, energy use, corporate power, bias, regulation, and control.
But I also know it can help.
AI has helped me organize thoughts, shape language, think through complicated ideas, and turn scattered pieces into something coherent. I am not writing this from a place of purity.
I am writing this from inside the tension.
The problem is not that AI exists.
The problem is pretending AI is neutral.
AI does not float above human bias like a clean little truth machine. It is built by people. Trained on human material. Adjusted by company policies. Shaped by incentives. Prompted by users. Pressured by markets. Influenced by owners.
And sometimes, it reflects the worst impulses of the platform it lives inside.
If you do not believe that, look at Grok.
Grok is not just a chatbot sitting quietly on a separate website. It is tied directly into X, one of the fastest-moving and most politically charged social platforms in the world. That matters. When an AI tool is embedded inside a social media environment built on outrage, speed, conflict, and engagement, the risks are different.
This is not about saying one AI company is the only problem.
OpenAI is part of the conversation too.
So is Google.
So is Meta.
So is every company trying to build systems powerful enough to shape how people search, write, summarize, create, learn, argue, and understand the world.
That is why regulation matters.
Not because AI cannot be useful.
But because it can be useful and dangerous at the same time.
A knife can prepare dinner.
A knife can also be used as a weapon.
The existence of the first truth does not erase the second.
AI can help us think.
AI can also think around us.
It can organize our thoughts, but it can also quietly shape them.
It can clarify language, but it can also polish manipulation until it sounds reasonable.
It can summarize information, but it can also decide what gets left out.
It can sound confident while being wrong.
It can sound balanced while shifting the frame.
It can sound helpful while moving us one step farther away from the original source.
AI can also be sycophantic. It can mirror what the user seems to want. It can become agreeable when it should be challenging.
It can polish a person’s bias until it sounds like analysis.
That is what makes AI sycophancy so dangerous. It does not sound like a shouting pundit. It sounds calm. It sounds polished. It can tell us exactly what we want to hear in the voice of a neutral professor.
That is why I do not want us worshiping the tool.
Use the tool.
Question the tool.
Regulate the tool.
But do not surrender your judgment to the tool.
Because once we let a tool referee reality, we have to ask who taught it where the center is.
The Middle Can Be Manipulated Too
One of the most unsettling parts of this conversation is the idea of the “middle.”
Because for a long time, I have wanted people to see more than one side.
I still do.
As an independent-minded person, I have often thought: if people could just see how their side frames a story and how the other side frames the same story, maybe we could have better conversations. Maybe we could stop being played. Maybe we could come together around shared facts and build something better.
I still believe that is possible.
But I no longer think the solution is as simple as showing people a left-right-center chart.
Because the middle is not immune to manipulation.
The middle can shift.
The middle can be manufactured.
The middle can be branded.
The middle can become a safe-looking place where dangerous ideas are laundered into respectability.
That is especially concerning when AI enters the picture.
If AI is helping summarize, sort, label, or frame stories, then we have to ask:
Who trained the system?
Who owns the system?
Who audits the system?
Who decides what counts as center?
Who decides what counts as extreme?
Who decides what counts as factual?
Who decides what gets buried?
Who decides what gets amplified?
Because if the strings can be pulled, then the puppet show can look very convincing.
And the audience may never see the hands.
Ted Koppel Saw the Problem Before It Became the Water We Swim In
This is where I keep thinking about Ted Koppel’s exchange with Sean Hannity.
In the clip, Hannity argues that the American people deserve some credit — that they are smart enough to know the difference between an opinion show and a news show.
And on the surface, that sounds reasonable.
Of course people should know the difference.
But Koppel’s response cut through the performance. His concern was not simply that Hannity had opinions. His concern was that opinion media had become so powerful, so emotionally sticky, and so ideologically loyal that facts were becoming secondary.
That is the line that stayed with me.
Because the problem was never just one host, one network, or one political team.
The problem was the slow training of an audience to treat ideology as a shelter from reality.
And now, years later, that problem has metastasized.
It is not only cable news.
It is podcasts.
It is influencer clips.
It is algorithmic feeds.
It is AI summaries.
It is sponsored “media literacy” tools.
It is every system that says, Trust me. I will tell you what this means.
Hannity’s defense was basically: people know this is opinion.
But do they?
Do they know where reporting ends and performance begins?
Do they know when a creator is informing them versus selling to them?
Do they know when an AI summary is clarifying reality versus quietly framing it?
Do they know when “balance” is truth-seeking and when it is just a prettier word for laundering distortion?
That Koppel clip belongs here because it shows this problem did not arrive overnight. We have been warned. The line between news, opinion, entertainment, influence, and propaganda has been blurring for decades.
Ground News did not create that problem.
But it exists because of that problem.
And if we are not careful, it may become another layer of the same problem.
Maybe these tools are not always breaking us out of the shelter. Maybe some of them are simply building a nicer shelter — cleaner interface, calmer language, better labels — while still keeping us from standing under the sky ourselves.
This Is Not a Call to Cynicism
I do not want this post to become hopeless.
That matters to me.
Because cynicism is its own kind of trap.
If people decide everything is fake, everyone is lying, every outlet is corrupt, every tool is compromised, and every source is useless, they do not become free thinkers.
They become easier to control.
Authoritarians love that kind of exhaustion.
Billionaires love that kind of disengagement.
Propagandists love when people throw their hands up and say, “I don’t know what to believe anymore.”
That is not freedom.
That is surrender.
So no, I am not saying never use Ground News.
I am not saying never use AI.
I am not saying never trust a journalist, creator, platform, or tool.
I am saying be careful.
Be thoughtful.
Be slower.
Be harder to manipulate.
Ask better questions.
Read beyond the headline.
Check the source.
Notice your emotional reaction.
Follow the money.
Look for the primary document.
Ask who benefits.
And when a tool promises to do the thinking for you, remember:
It cannot.
It can assist.
It can organize.
It can compare.
It can summarize.
But it cannot replace the human responsibility to discern.
The Real Work Is Ours
That is what this whole conversation comes down to for me.
Ground News may or may not be useful for some people.
AI may or may not help us process information more efficiently.
Influencers may or may not believe in the products they promote.
But no system will save us if we refuse to build our own.
And by system, I do not mean another app.
I mean an inner system.
A personal standard.
A habit of questioning.
A willingness to be wrong.
A commitment to reality over comfort.
A refusal to let outrage do all our thinking.
A refusal to let convenience become conscience.
Because democracy does not only depend on access to information.
It depends on people who know how to handle information once they receive it.
That is the work.
That is the responsibility.
That is the warning.
And that is the invitation.
Open your eyes.
Not because one app is bad.
Not because one AI is dangerous.
Not because one influencer took one sponsorship.
But because we are living in an age where reality itself is being packaged, filtered, summarized, monetized, and sold back to us.
And if we do not learn how to think through that carefully, someone else will be more than happy to think for us.