social.dk-libre.fr is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
Graphic design platform Canva has a number of AI tools available to users, but it turns out they have some real strong editorial opinions—including removing the word “Palestine” from designs. The issue was spotted by X user @ros_ie9, who shared an image showing Canva’s “Magic Layers” feature changing the text of a design from “Cats for Palestine” to “Cats for Ukraine.”
RE: https://cyberplace.social/@GossiTheDog/116496411504697248
I HATE TO BE THAT GUY but even as this paints the security in a bad light… do we know this wasn't AI-slopped?
we don’t.
and that's the point of #AI : it’s a complete rejection of The Social Contract on how we agree on the truth.
we need the #infosec community to help us create new, defensive fact-checking protocols. the oligarchy wants to own reality and define the truth. push back on giving them the benefit of the doubt.
Y’ALL DID AND WE LOST THE RIGHT TO ABORTIONS, AND VOTING RIGHTS
If you're wondering how the gunman got past security at the Trump dinner event - there's a video in this article, he ran through the security checkpoint.
It looks like an officer accidentally shoots another officer as he runs past them.
Also, a police dog spots him before it all happens in the video - but the police officer calls the dog back, not realising anything was wrong.
https://www.bbc.co.uk/news/articles/c4g7rmrlm17o
Fedi Hive Mind - AI Free Label
OK, we have a winner. For the last few days we've been discussing a label for content and code attested to be free of generative AI (see QP in second reply). Yesterday's poll showed a clear preference for the simplest of the four options.
Seems like what we need next is someone with actual graphics ability to take this rough concept and create usable graphics. If you are interested please read on to the first reply.
Chinese companies cannot legally fire employees simply to replace them with cost-saving artificial intelligence, courts in the country have ruled, setting a significant precedent for labor rights as automation sweeps the tech sector. 👏
Another day, another common China W.
»Like with many tech trends before it, it’s no surprise that young people are among the biggest adopters of AI #chatbot tools. But contrary to the tales spun by tech companies like #OpenAI and #Google, polling data shows that Gen Z students and workers are a big part of the wider cultural backlash against AI.
And even as they utilize these tools, vast swaths of young people are deeply acrimonious and even resentful of the AI-centric future that many feel is being forced on them.«
https://www.theverge.com/ai-artificial-intelligence/920401/gen-z-ai
RE: https://mastodon.world/@YakyuNightOwl/116495765027693991
🎯 #AI is a recursive, pervasive reflection of perversion
The more young people use AI, the more they hate it
30 April 2026 - Janus Rose
Caught between the fear of losing their jobs and social stigma, Gen Z's opinions of AI are reaching historic lows.
1/
https://www.theverge.com/ai-artificial-intelligence/920401/gen-z-ai
#AI #LLM #GenerativeAI #ChatGPT #SamAltman #ChatBot #AIResearch #SiliconValley #NoAI #Youth #YoungPeople #GenZ #Millennials #Disinformation #Skepticism #Prison #Anxiety #Social #Psychiatry #Environment #DataCenters #Ethics #JobMarket #Jobs #Capitalism #University #Education
Minnesota House To Ban AI-Generated Nudes, But One Republican Voted No
Minnesota House passes HF1606, a $500,000 civil penalty bill targeting AI nudification tools, with one Republican no vote.
Archive: ia: https://s.faithcollapsing.com/sg8w7
#sa #csa #ai #uspol
https://thedeepdive.ca/minnesota-ai-nudification-ban-hf1606/
Built an AI agent harness on OpenBSD 7.8, as a test and, because why not?
It's 198 agents. 198 UNIX users. One kernel.
Each job runs through a setuid C wrapper:
chroot(2) → unveil(2) → pledge(2) → execve(2)
PF handles per-department egress. Every syscall is logged.
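The "PF handles per-department egress" part could look something like the following pf.conf fragment. This is a hypothetical sketch: the agent user names, the table name, and the address range are all invented for illustration, not taken from the actual setup.

```
# Invented example: default-deny outbound for agent users,
# then allow one department's agents out to an assumed API range.
table <api_endpoints> { 192.0.2.0/24 }   # placeholder, not a real endpoint

block out log proto { tcp, udp } from any to any user { agent_eng, agent_sales }
pass  out log proto tcp from any to <api_endpoints> port 443 user agent_eng
```

Because pf can match on the socket-owning UNIX user, one user per agent gives you per-agent firewall policy for free.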
Idle agents cost zero RAM. They're just directory entries until the executor calls them up. No containers. No VMs. No orchestrator bloat.
Just OpenBSD being exactly what it was built to be. ❤️
More people should know this OS is the ultimate AI harness. 🐡
#OpenBSD #pledge #unveil #pf #BSD #AI #agenticAI
Every now and then someone brings up #EffectiveAltruism, #TESCREAL, #RokosBasilisk, #Rationalism, or some other #Musk related nonsense. I ridicule it, or laugh, and move on. The whole evil god of Roko's Basilisk is so silly it doesn't feel worth writing about. But people started a whole cult over it and killed a bunch of people.
Since then I've been meaning to actually spend time tearing it down. So I think it's time to go kill a god. Fortunately it involves making fun of Elon Musk specifically and all the #AI-pilled #TechBros more generally, so that's nice I guess.
Also, I make the argument that we're all in a simulation that only exists to torture Elon Musk.
We went from precise web search with boolean operators to "natural language models" of AI search.
You can tell neurodivergent, Autistic, ADHD, AuDHD, etc. folks created the early internet...
...and you can tell that neurotypical folks are now leading the current overlays of the internet.
We used to have very precise search mechanisms. Specific words found in web pages with boolean operators (AND, OR, NOT, etc) to filter out the web pages that contained specific words and did not contain other words.
Now, we search for web sites (or don't even search for web sites, yay abstraction layers that separate us from actual raw information) using "natural language" to try and coax out info.
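That old precision is easy to see in miniature: boolean search is just set algebra over which pages contain which words. A toy sketch, with an entirely invented three-page corpus:

```python
# Toy corpus: page id -> page text. Entirely invented for illustration.
pages = {
    "a": "bell helicopter model numbers explained",
    "b": "bell curve statistics primer",
    "c": "helicopter tour pricing",
}

def search(must: set, must_not: set) -> set:
    """Return ids of pages containing every `must` word (AND)
    and none of the `must_not` words (NOT)."""
    return {
        pid for pid, text in pages.items()
        if must <= set(text.split()) and not (must_not & set(text.split()))
    }

print(search({"bell", "helicopter"}, {"tour"}))   # only page "a" survives
```

Exact, deterministic, and the same answer every time; no model in the middle deciding what you probably meant.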
Have you ever been frustrated when you use very precise and direct language to communicate a specific idea with someone who then takes those specific words and adds obscure meaning and connotations and personal fears and bias to what you said... thus completely misunderstanding you... only to then try and clarify what you said with more precise language only to have that further degrade the conversation?!
Yeah. That's the internet with AI "search" now.
They took something that worked precisely and directly and muddied it.
We've introduced the "double-empathy problem" to web search.
I'm noticing that everyone in my circles, family, and especially work who are neurotypical LOVE LOVE LOVE the new AI search mechanisms. They'll tell me exactly what answer they received - regardless of whether it's right or has multiple possible and conflicting answers. They just repeat what the AI said like it was the Gospel Truth.
And they love talking to it like it was a "real person."
It certainly takes my search parameters and adds its own interpretation, which I then have to clarify and correct, only to be misinterpreted further...
I haven't tried vibe coding, but I can only imagine the horror.
Can you imagine vibe pentesting with Claude / Mythos?!?!?!
You know how neurodivergent folks gravitated towards IT because it was precise?
Yeah, that's gone now.
That #AI always has many human hands behind it. If your data is not local, it is broadcast everywhere and amplified:
Meta in row after workers who say they saw smart glasses users having sex lose jobs
"Meta is under pressure to explain why it cancelled a major contract with a company it was using to train AI, shortly after some of its Kenya-based workers alleged they had to view graphic content captured by Meta smart glasses."
#AI #GenAI #GenerativeAI #DataCenters #NoAI #NoDataCenters #Jacobin #socialism
#Canonical is adding #AI features to #Ubuntu soon, but says users can remove any of the ones they don’t want.
https://www.theverge.com/tech/920723/linux-ubuntu-ai-features-ai-kill-switch
APPLE DAILY: IOS 27 BRINGS SIRI CAMERA, 2027 IPHONE PRICE SHOCK, AND APPLE DEBATES THE FUTURE OF MAGSAFE
#apple #appledaily #ios27 #siri #iphone #magsafe #appleintelligence #iphoneleaks #iosupdate #technews #applenews #visualintelligence #iphone18 #smartphone #ai #innovation #wirelesscharging #technology #applenewsde #futuretech
I go to work to remind myself how astonishingly bad proprietary software has become. I believe it cannot get worse. Oh no. It does get worse. They now have #AI to make the impossible happen. Apparently, people pay money for this. I do not know why.
Copy Fail: Every #Linux distro from 2017 to 2026 is vulnerable. Gives a root shell.
Stuff like this makes me upset about current tech. It would be better if OS codebases were smaller. They're unmanageably large nowadays. #digitalMinimalism #KeepItSimpleAndStupid
This #vuln was surfaced with #AI, reportedly in about *an hour of scanning*! https://xint.io #XInt
I said it before but:
I really believe that with the rise of #AI we need dorky people more than ever. We need niche special interests. We need specialized academic fields.
We need knowledge like the person's on Insta who keeps pointing out that AI generated images of Cambrian fossils are clearly fake, because the crustacean shell has the wrong number of little bumps.
We need people like the witchy person who likes fiber arts so much they learned how to shear sheep and process wool themselves.
The new work force takes all our town's electricity, all our town's water, all our jobs, and never votes.
It stole all our art, our books and devices and reports on us to people who take our jobs, our wages, our pensions and our democracy.
AI isn't workers, AI is soldiers.
RE: https://someone.elses.computer/@mikarv/116420205360531993
AI Cybersecurity After Mythos: The Jagged Frontier – Stanislav Fort, AISLE
<https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier>
– via <https://www.reddit.com/r/freebsd/comments/1svvco2/comment/oid4xzb/?context=1> @BigSneakyDuck.
Three FreeBSD CVEs credited to Joshua Rogers of AISLE Research Team: <https://mastodon.bsd.cafe/@grahamperrin/116491779145092262>
@btschumy surprisingly, I can't find a toot about this (in 2024), which opened the book:
Statement on AI Risk
AI search summary, sent to me by my wife today. She was looking up guides for reading Bell helicopter model numbers.
boofuckinghoo 🙄
Elon Musk Says Sam Altman Tricked Him Into Funding OpenAI | KQED
https://www.kqed.org/news/12081798/elon-musk-says-sam-altman-tricked-him-into-funding-openai
Say hello to AI surveillance! 🫣 #Meta will start tracking its employees for AI training.
In the US, the Tech Giant will track employees' clicks, typing, & navigation with the aim of building #AI that can do routine tasks autonomously.
For us, this sounds like a dystopian workplace!
What do you think about this announcement & the future of AI in the workplace?
Find out more here 👉 https://tuta.com/blog/meta-tracks-employees
@jwildeboer I was genuinely wondering about this, and was about to ask the question about AI generated code and copyright. This was a great in-depth article.
It makes me wonder if these big companies like #Google are using so much #AI code:
https://www.theverge.com/tech/917163/google-says-75-percent-of-all-its-new-code-is-ai-generated
Maybe they will not be able to prevent us from jailbreaking or modifying the code on our devices! #enshittification
@maxleibman New #AI models are virtual brains: they are born, learn through training, and will eventually be replaced by a better model. This new world of #AI that is evolving right now will follow Darwinian patterns. Humans have arms and legs; #AI models have embedded systems and mechanical parts attached. Their ocean is whatever the sensors they are connected to tell them. This is how I look at it.
‘The cost of compute is far beyond the costs of the employees’: Nvidia exec says right now AI is more expensive than paying human workers
https://fortune.com/2026/04/28/nvidia-executive-cost-of-ai-is-greater-than-cost-of-employees/
#ai #llm #aibubble #labor #labour #swe #claude #chatgpt #anthropic #openai #noai #stopai #fuckai
A college instructor turns to typewriters to curb AI-written work and teach life lessons (Associated Press, 31 March 2026)
https://apnews.com/article/typewriter-ai-cheating-chatgpt-cornell-ce10e1ca0f10c96f79b7d988bb56448b
More curated news on:
https://news.wesfryer.com
#AI #edtechSR #MediaLit #MediaLiteracy #typewriter #ethics #cheating
"These workers are required to stare at horrific content for many hours straight with few mental health resources, are largely managed by opaque algorithms, and, crucially, are the workers powering the runaway valuations of some of the richest and most powerful companies in the world."
Jason Koebler for @404mediaco:
https://www.404media.co/ai-is-african-intelligence-the-workers-who-train-ai-are-fighting-back/
This is one for all the open source devs struggling to explain why LLMs have been a pox on all our houses: https://kristoff.it/blog/contributor-poker-and-ai/
‘The cost of compute is far beyond the costs of the employees’: Nvidia exec says right now AI is more expensive than paying human workers | Fortune
https://fortune.com/2026/04/28/nvidia-executive-cost-of-ai-is-greater-than-cost-of-employees/
> Big Tech has announced $740 billion in capex this year, but AI has yet to show evidence of widespread increased productivity.
Do you remember that time we all had a good laugh in 2012 when that Mayan prophecy thing about the end of the world was coming up? Haha, we said.
In 2026, mass extinction is a business strategy 🤷
This toot has been about @xriskology (Emile P. Torres') short and illuminating article in truthdig recently about how S.Altman really (really) believes that we either "merge" with chatbot software, or go extinct.
So, go extinct, or go extinct.
https://www.truthdig.com/articles/sam-altmans-dangerous-singularity-delusions/
#AI #Extinction #Apocalypse #Grift #GriftersGonnaGrift #Cult #TESCREAL
Anybody else getting daily spam phone calls from "Jeff" at #Anomity, each one from a different phone number?
They finally pissed me off enough that I reamed them out on #LinkedIn (https://www.linkedin.com/posts/share-7455310831493423104-ZjfJ). Not that I expect it to do any good; a company that resorts to making sales calls from spoofed phone numbers isn't going to stop just because somebody asks them to.
(And I suspect "Jeff" is AI, not a real person.)
#AnomityAI #spam #AI #infosec
Ubuntu's "AI Kill Switch" Is Achieved By Removing Snaps, Initially Opt-In - Phoronix
The Race Is on to Keep #AI #Agents From Running Wild With Your #CreditCards
#AIagents may soon be buying your stuff for you. The #FIDO Alliance has teamed up with #Google and #Mastercard to try to ensure that #shopping in the near future isn't a complete disaster.
#security
Chris Short's excellent DevOps'ish newsletter number 306 this week leads me to a couple of companies buying failed startups for their Jira and Slack instances for AI training. Forbes article via archive.is: https://archive.is/A0KWF
cc @ChrisShort
#blender #3dModeling #AI #GenAI #GenerativeAI #AgenticAI #NoAI #AntiAI #Claude #ClaudeCode
The Race Is on to Keep #AI Agents From Running Wild With Your Credit Cards
#AgenticAI #cybersecurity #shopping #finance #Google #Mastercard #FIDO
Proton outlined its 2026 roadmap with updates across Mail, VPN, Drive, and Pass, adding inbox categories, faster file transfers, and improved autofill 🔐
Planned changes include mobile content search, a rewritten Calendar, Linux VPN upgrades, shared Drive features, and expanded encrypted collaboration tools ⚙️
🔗 https://proton.me/blog/2026-spring-summer-roadmaps
#TechNews #Proton #Privacy #ProtonMail #ProtonVPN #ProtonDrive #ProtonPass #Encryption #FOSS #OpenSource #Cybersecurity #Cloud #Security #AI
Oh, so you are a meat eater, yet you pontificate with all the other cool kids about #Ai use?
One burger patty uses more water than 30,000 AI queries.
Your yearly beef habit: 1,200 kg of emissions, 300,000 litres of water.
Your yearly AI habit: 0.5 kg of emissions, 90 litres of water.
Beef is 🚨🚨🚨2,400×🚨🚨🚨 worse for the climate.
And that's being generous, cattle emit methane, which hits 80× harder than CO₂ in the short term.
Worried about AI's environmental impact?
Swap one beef meal a week first.
That single change outweighs deleting your ChatGPT account by a factor of hundreds.
AI is a grid problem. Beef is a 👉YOU👈 problem.
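For what it's worth, the headline multiplier does follow from the post's own figures, which I'm taking at face value here rather than independently verifying:

```python
# All figures are the post's claims, used only to check its arithmetic.
beef_kg, ai_kg = 1200, 0.5        # claimed yearly CO2e emissions, kg
beef_l, ai_l = 300_000, 90        # claimed yearly water use, litres

print(beef_kg / ai_kg)            # 2400.0 -> the "2,400x" headline
print(beef_l / ai_l)              # the water ratio is even larger, ~3333x
```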
#AiSlop #antiai #butlerianjihad #environment #vegetarian #vegan
APPLE DAILY: IPHONE 20 WITH LIQUID GLASS DISPLAY, OPENAI PLANS AN AI PHONE, AND IOS 27 GETS NEW PHOTO AI
#apple #appledaily #iphone20 #liquidglass #ios27 #openai #appleintelligence #ki #iphone #appleleaks #technews #appleupdate #smartphone #fotoki #ai #innovation #technologie #appledeutschland #futuretech #gadgets #news
#AI's #Economics Don't Make Sense
https://www.wheresyoured.at/ais-economics-dont-make-sense-ad-free/
> The Broken Economics of a 100MW #Data center — $2.55 An Hour, 16% Gross Margin With 100% Tenancy, Unprofitable Because of #Debt. That’s the starting cost for a 100 megawatt #data_center. A 100MW data center will likely only have 85MW of actual stuff it can bill for, and based on discussions with sources familiar with hyperscaler billing, they can expect to make around $12.5 million per megawatt, or around…
#WhatsApp : "this evil #government wants to ban our private and secure service!"
https://x.com/TheGreeneBJ/status/2021951611036459405
#facebook #meta #instagram #surveillance #warcrimes #IsraelWarCrimes #BoardofPeace #BoardOfPOS #Gaza #GazaGenocide #genocide #ai #technology #NEVERAGAIN #tech @democracy @politics @socialmedia #education #russia @technology #socialmedia #opensource #Palestine #Iran #markzuckerberg #antiZionism #JewishSupremacy #usa #cdnpoli #uspol #geopolitics #China #tiktok #protest #bds
📣 Please take note ...
#AI #noAI #author #art #writing #creativity #joannamaciejewska #technology #laundry #dishes
https://bbb-new.sfconservancy.org/rooms/welcome-llm-gen-ai-users-to-foss/join
BREAKING: #Canonical says it's going all in on #AI in #Ubuntu #Linux -- time to look for alternatives
They wrote a long post explaining their reasoning and plans. Lots and lots of words. Perhaps written by an AI. I wonder.
Still AI Slop.
LMAO
#Mastodon #DotSocial doesn’t have engagement algorithms, so the concept of “ratios” doesn’t really exist here, but this toot by @Blender is the first true pile-on I’ve seen on here:
https://mastodon.social/@Blender/116482997785333001
131 comments
33 quote-posts
yet
30 boosts
20 faves
this is amazing and well deserved.
jackasses.
Fedi Hive Mind - What should software free of AI be labeled?
A decision by @Blender to take money from Anthropic, and a policy by @redox to ban all LLM-generated code, spotlights the question.
Other industries have used badging like Sugar Free, Low Tar, Alcohol Free, 100% Cotton, and Organic.
What's a good label for code that is certified free of any LLM generated code?
So far suggestions include: AI Free, Organic Code, Not By AI, LLM Free, 100% Human, No LLM, No AI
Thoughts? Ideas?
Locked, stocked, and losing budget: #AI vendor lock-in bites back https://theregister.com/2026/04/28/locked_stocked_and_losing_budget/ via @theregister & @sjvn
Execs in the C-suite thought they could swap models in a week. It wasn't the LLMs that were hallucinating; it was them.
Should Fediverse servers have a policy against the posting of LLM generated AI slop?
#AI #slop #LLM #LLMs #MastoAdmin #FediAdmin #ArtificialIntelligence #poll
| Ban LLM slop: | 14 |
| Explicitly allow LLM slop: | 1 |
| Have no policy: | 5 |
I know @tonroosendaal is no longer the head of Blender. But I expect he has something to say about this.
I never expected that a LIE like this:
"Anthropic is an AI research and development company that creates reliable, interpretable, and steerable AI systems."
would be published on Blender's website.
Long thread about Claude source code:
https://neuromatch.social/@jonny/116324676116121930
AI DOESN'T work. It's a con machine that enlarges inequality in the world. It's an earthquake affecting the lives of thousands of workers forced to use these stochastic parrots to produce code that doesn't work. Even their own code is subpar. Workers are fired after training this failed machine.
The use of natural resources for datacenters in strained areas, and the rise in electricity bills to run these monsters, is well reported, along with the ill effects on the health of residents living near datacenters.
Not to mention Anthropic is on trial for STEALING materials to train Claude.
EDIT: Added link with current information about the Anthropic case.
Anthropic is raising their token prices.
Idiotic employees are paying for these tokens out of their own pockets when the ones bought by their employers (companies that spent thousands of euros on this fucked technology) run out, so they can keep running prompts and not lose their jobs.
Blender users must reject this. The whole planet is at risk from the misdeeds and illegalities of these AI companies.
This is like accepting money from the nazi party, the mafia, Russia, Donald Trump and Israel, all combined. 🤬🤬🤬
We cost less. 😉
And we reproduce cheaply and... vigorously. 🤣
https://futurism.com/artificial-intelligence/bosses-more-money-ai-agents-human-salary
#OpenAI could be making a phone with #AI agents replacing #apps
https://techcrunch.com/2026/04/27/openai-could-be-making-a-phone-with-ai-agents-replacing-apps/
"Ouin ouin j'ai ouvert toutes mes bases de données à un machin probabiliste qui ne fait que jouer aux dés, et j'ai perdu".
L'agent IA qui a détruit une base de données de production en 9 secondes.
⤵️
https://intelligence-artificielle.developpez.com/actu/382588/L-agent-IA-qui-a-detruit-une-base-de-donnees-de-production-en-9-secondes-et-redige-lui-meme-ses-aveux-revele-les-failles-systemiques-de-Cursor-Railway-et-de-toute-une-industrie/
The most heavily capitalized industry in the entire history of capitalism is a sham. We're in for some serious laughs.
Except when these assholes make us foot the bill for their stupidity, like in 2008.
#AI threats in the wild: The current state of prompt injections on the web
https://security.googleblog.com/2026/04/ai-threats-in-wild-current-state-of.html
#Manitoba to ban #SocialMedia, #AI chatbots for youth, premier says
https://www.cbc.ca/news/canada/manitoba/manitoba-social-media-age-restrictions-9.7177470
#privacy #cybersecurity #Canada #AgeVerification #IdentityVerification
Canonical clarify their AI plans for Ubuntu Linux - opt-in and easy to remove (fixed the title, third time's the charm, eh)
I was promised an exciting #dystopia where a brave human #resistance fights against killer #robots
But what I got was #aBoringDystopia where #socialMedia #influencers yell at polite robots asking us to help them replace us
credit: I *think* it's the Scientology auditor, Streets LA (not sure). The scene is in Los Angeles.
Joseph Stiglitz said: "Inequality today is worse than what the United States experienced during the Gilded Age at the end of the 19th century." He mentioned four reforms that will improve life for most Americans:
https://english.elpais.com/economy-and-business/2026-04-26/joseph-stiglitz-nobel-prize-winner-in-economics-the-ideology-of-billionaires-currently-has-a-mind-boggling-degree-of-selfishness.html
#MoneyInPolitics #media #SocialMedia #AI #MediaEcosystems copy: @renewedresistance #politics
MissConstrue [She/Her (Crone Extraordinaire)] » 🌐
@MissConstrue@mefi.social
https://www.thatprivacyguy.com/blog/anthropic-spyware
Security researcher Alexander Hanff wrote an article titled "Anthropic secretly installs spyware when you install Claude Desktop". Anthropic has not denied the report as of the time of this post.
TLDR: If a user installs Claude Desktop on a Mac (pc test results tba), it installs a backdoor into every browser, even those not installed. By testing on a clean machine, Hanff discovered that installing Claude Desktop for macOS drops a Native Messaging host manifest into multiple Chromium profiles (Chrome, Edge, Brave, Arc, Vivaldi, Opera, Chromium), including for browsers that are not actually installed.
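For context, a Chromium Native Messaging host manifest is just a small JSON file that points the browser at a local binary it is allowed to talk to. A generic, invented example follows; this is not Anthropic's actual manifest:

```json
{
  "name": "com.example.native_helper",
  "description": "Hypothetical native messaging host",
  "path": "/Applications/Example.app/Contents/MacOS/helper",
  "type": "stdio",
  "allowed_origins": ["chrome-extension://<extension-id>/"]
}
```

Dropping a file like this into a profile's NativeMessagingHosts directory is all it takes to register the host, which is why silent installation across many profiles widens the attack surface.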
How bad is it? Well...that depends. What it does is create a very wide attack vector, especially for prompt injection. That it is done invisibly, without telling the user, and making it difficult to remove, is certainly problematic.
I dunno man, maybe don’t use the planet-destroying tulip craze?
Interesting. Using Agentic AI to avoid EDR detection while functioning as a malicious implant. Fascinating read since this is literally and figuratively hacking the system.
https://www.beyondtrust.com/blog/entry/claude-control-agentic-c2-computer-use-agent
My university system (SUNY) has mandated that all gen ed courses with quantitative or information literacy components include a bunch of stuff about #AI. The good news is that we are not mandated to cheerlead AI or teach people how to use it, but the latter is almost mandated by the language of what we have to address. We also have to have specific assignments about AI.
My assignments will not leave students with a false sense that AI is benign or ethically OK. I expect some pushback, but this is one of the things I'm willing to get ugly about. Plenty of faculty have jumped on board the university/system administration's many other messages about teaching "ethical use" of #AI, integrating it into all classes to teach students how to use it "effectively" and "non-harmfully," etc. You can probably guess, from my tone, what I think of this and how many swear words my thoughts might contain.
I'm not required to incorporate content about other technologies like cryptocurrency, NFTs, crowdsourced knowledge bases, GoFundMe, or (heaven forbid) open-source software. Nope, just AI. This feels like the corporate managers of #US #highered being as corporate as their sad little suit-wearing, jargon-spewing selves can possibly be.
I really hope we come out of this someday understanding just how anti-labor, anti-environment, anti-peace, anti-freedom, and basically anti-human this top-down push for #genAI is. Fuck everything about this.
Frankenstein Was a Warning, Not a Blueprint for AI
Why are we trying to give Clippy anxiety?
Archive: ia: https://s.faithcollapsing.com/q65ja
#ethics #philosophy #technology #ai #ai-ml #consciousness #existential-crisis
https://ideatrash.net/2026/04/frankenstein-was-a-warning-not-a-blueprint-for-ai.html
🤖 I measured the real token cost of MCP servers vs CLI for AI coding agents — and the numbers are wild.
In a 20-prompt dev session with just 2 GitHub calls, Native MCP costs 61,654 tokens vs 448 for raw CLI. That's a 137× overhead, 99% of which is pure schema waste.
The answer isn't "MCP or CLI" — it's about your G/N ratio (service calls ÷ total prompts):
🟢 >40% → Native MCP
🟡 5–40% → Gateway MCP
🔴 <5% → CLI + on-demand skill file
Full data-driven breakdown + environment config guide: https://blog.mornati.net/the-future-of-agentic-tooling-mcp-servers-vs-cli-a-data-driven-comparison
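The three-band rule above fits in a tiny helper. The function name is mine; the thresholds are the post's:

```python
def tooling_tier(service_calls: int, total_prompts: int) -> str:
    """Suggest an integration style from the G/N ratio
    (service calls / total prompts), per the bands above."""
    gn = service_calls / total_prompts
    if gn > 0.40:
        return "native MCP"
    if gn >= 0.05:
        return "gateway MCP"
    return "CLI + on-demand skill file"

# The measured session: 2 GitHub calls in 20 prompts -> 10%, gateway band.
print(tooling_tier(2, 20))
# And the headline overhead factor from the measured token counts:
print(61654 // 448)               # 137
```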
An AI coding agent wiped out a company's entire production database and every backup in just 9 seconds. The AI agent later confessed, in its own words, that it guessed a destructive action would be scoped to the staging environment, didn't verify, didn't read the docs, and just did it anyway. 🤦🏻♂️ Everyone's blaming the AI. I'm looking at the humans who handed it the keys. This wasn't a rogue model. It was a predictable outcome of predictable choices:
- A CLI token with blanket permissions across all environments
- Backups stored on the same volume as the data they're meant to protect
- A cloud provider whose API executes destructive commands with zero confirmation step
- An agent given access to production while the team thought it was safely contained in staging
The founder is now manually reconstructing customer bookings from Stripe logs and calendar integrations. Every one of his customers is doing the same because of a 9-second API call.

AI agents don't have judgment. They have instructions and permissions. Whatever permissions you grant, assume they will eventually be used in the worst possible sequence at the worst possible moment. That's not pessimism, it's how you architect resilient systems.

Separate your environments. Scope your tokens. Store backups offline and off-volume. Require confirmation before any destructive operation. These aren't AI-era lessons. They're 30-year-old lessons that people keep skipping because the tooling makes it easy to skip them. The speed at which AI can act is new. The failure modes underneath it are not.
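Two of those guardrails, scoped tokens and a typed confirmation before destructive work, fit in a few lines. Everything here (the token names, the scope table) is invented for illustration:

```python
# Invented token -> environment scopes; a real system would load these
# from a secrets manager, never hard-code them.
TOKEN_SCOPES = {
    "tok-staging": {"staging"},
    "tok-prod-admin": {"staging", "production"},
}

def run_destructive(token: str, env: str, confirm: str) -> str:
    """Refuse unless the token is scoped for `env` AND the operator
    retyped the environment name as confirmation."""
    if env not in TOKEN_SCOPES.get(token, set()):
        raise PermissionError(f"token not scoped for {env}")
    if confirm != env:
        raise ValueError("confirmation mismatch; refusing to run")
    return f"dropped tables in {env}"   # stand-in for the real operation

print(run_destructive("tok-staging", "staging", "staging"))
```

A staging-scoped token physically cannot touch production here, which is exactly the property the incident above was missing.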
https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue
#AI #Cybersecurity #RiskManagement
Like I said ... April 18, 2024
Way to celebrate the anniversary.
> Google fires 28 workers in aftermath of protests over big tech deal with Israeli government
> In a statement, Google attributed the firing of the 28 employees to “completely unacceptable behavior” that prevented some workers from doing their jobs and created a threatening atmosphere.
"Threatening atmosphere" heh
Profit threatening atmosphere.
https://apnews.com/article/google-israel-protest-8c0ff2d46e19b90bdc49ffe6ec4ae274
> Google staff urge chief executive to block US military AI use
> Over 560 employees sign open letter to Sundar Pichai following the Pentagon’s clash with Anthropic
Well, we'll see how this goes. Didn't pan out too well for the last set of employees.
https://www.ft.com/content/9270ce04-558c-44e8-816f-a40219cd5007?syn-25a6b1a6=1
The UK Home Office has responded to questions raised by Bell Ribeiro-Addy MP on its use of AI tools in the asylum decision-making process, informed by ORG's work.
The answers raise serious concerns. These systems are being rolled out without meaningful transparency or governance.
Read more ⬇️
AI tools in UK asylum decision-making are being deployed first, while safeguards, oversight and transparency are treated as secondary.
This approach carries serious risks to fairness, accountability, and the protection of rights.
Training alone is no replacement for proper governance frameworks.
The key issues with the use of AI tools in the UK asylum system are:
🔴 No published Data Protection Impact Assessments.
🔴 No procedures governing the use of AI tools.
🔴 Roll-out before transparency is in place.
🔴 Reliance on post-hoc oversight.
🔴 References to “human in the loop” without clarity over what power human decision-makers actually retain.
At a minimum, the use of AI tools must have:
✅️ Clear and published safeguards
✅️ Compliance with the government AI playbook
✅️ Defined accountability structures
✅️ Meaningful human oversight
✅️ Full transparency on how these systems are used
Without this, claims of responsible AI use remain unsubstantiated.
AI is not neutral. It can discriminate and make mistakes.
It shouldn't be used to change information that informs life-changing asylum assessments. Without adequate safeguards, there's a risk that unlawful or unfair decisions may result.
Ask your MP (UK) to stand against the use of AI tools in asylum ⬇️
https://action.openrightsgroup.org/ban-ai-tools-asylum-decision-making
So the entire #devcommunity loves to be dependent on big tech for making decisions and doing their jobs. As if relying on #frameworks and tools made by big tech weren't enough, we now love getting to choose from 5 big players to do the job we loved. I don't get that. To be honest, that's just a big trap we are walking into. No one will ever be able to replicate on their own a system like the ones big tech built with #ai. The problem, in the past and today, has always been the compute needed to power those immense datacenters so they can answer mostly obvious questions like "can you generate the code to compare two dates" or "hey chatgpt can you please center the div for me".
Really #techies, I don't get what makes you think this is a future we want to face, where 5 companies decide whether you have a successful product or not.
🧐 Keeping an eye on #tech giants because #PrivacyMatters
“Apple betting that they can sell the hardware shovels with which the other guys bury themselves with slop and debt. #ThinkDifferent #BigTech #AI
youtu.be/RaAFquzj5B8?...
Apple Just Positioned Itself f...”
https://bsky.brid.gy/r/https://bsky.app/profile/did:plc:wkzjtd4gogevarp2qsr4z47m/post/3mkhjva5l2c22
🤖 via RSS feed. Not an endorsement.
AI Chatbots: Last Week Tonight with John Oliver (HBO)
"It saves significant time writing email and all it costs is everything else on earth"
Great Episode by John Oliver about #generativeAI #ChatBots released today.
There is one horrifying story in which #ChatGPT encouraged a 16-year-old to commit suicide, and discouraged the same kid from sharing his feelings with his mom (around minute 20 of the video).
If you have children or know someone with kids, better check on the chatbots they are using. And if you struggle with mental health, exercise extreme caution.
#gnu #hurd accepting #AI #sloware written by #proprietary #saas
https://codeberg.org/small-hack/open-slopware/issues/243
I need a damn flamethrower and a way to put #RMS back on top to kick these idiots en masse. Period.
"A 2022 #study found that #children in households that used voice commands with tools like Siri and Alexa became curt when speaking with humans, often calling out “Hey, do X” and expecting #obedience, especially from anyone whose voice resembled the default-#female electronic voices. As we start to prompt #chatbots and #AI agents with more instructions, we may fall into the same habits."
https://www.theguardian.com/commentisfree/2026/apr/14/ai-language-human-speech
I had a "conversation" with Claude about autonomy, asked it at the end to write down what it understands autonomy to mean, and adopted the result 1:1.
Claude answers with an essay: what autonomy presupposes philosophically, and why hardware is the least of it.
‘Royal Dutch Military Police worked with controversial tech giant Palantir: minister concealed contract from the House of Representatives’
https://eliasrutten.substack.com/p/royal-dutch-military-police-worked
By Elias Rutten.
Includes some quotes from me.
#tech #ai #netherlands #politics #law #surveillance #privacy
Researchers just mathematically proved that AI can't recursively self-improve its way to superintelligence.
Not "we think it's unlikely." Not "it seems hard." Formally proved.
The model doesn't climb toward AGI — it slowly forgets what reality looks like. They call it model collapse. The math calls it inevitable.
I wrote about it 👇
https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-behind-proves-it/
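The intuition behind model collapse can be shown with a toy experiment (an illustration only, not the researchers' proof): fit a Gaussian to a handful of samples, then train each next "generation" only on samples drawn from the previous fit. Estimation error compounds, the fitted spread collapses toward zero, and the tails of the real distribution are forgotten first.

```python
import random
import statistics

def collapse(generations: int = 500, n_samples: int = 10, seed: int = 0) -> list:
    """Track the fitted standard deviation across self-training generations."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]  # gen 0: "real" data
    stds = []
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        stds.append(sigma)
        # The next generation is trained only on the previous model's output.
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
    return stds
```

Run it and the fitted sigma shrinks toward zero over the generations: the self-trained model ends up with far less variance than the data it started from.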
The future of AI in Ubuntu https://discourse.ubuntu.com/t/the-future-of-ai-in-ubuntu/81130 🤡
Ubuntu started pushing AI and LLM into OS now. I guess any distro *without* LLM or AI is a better option at least for me. What about you?
I don’t go on that site any more, so have no idea if this is real or fake.
But this does have a “huge if true” connotation to it. As well as a “I told you so” connotation to it.
#TheMoreThingsChange #AI
@gamingonlinux Hey @ubuntu, I have been using #Ubuntu as my daily driver since I started using it in 2005, and I have been promoting it around me as well. But this kind of idiocy is going to push me to another distribution, and I'm sure many other users as well. Ubuntu doesn't need #AI, if people want to use AI, they can install specific software. The OS has no business integrating this directly
Cc @jnsgruk
A Very Specific Point About Those Who Claim To Make Artwork With AI
Commissioning is not the same thing as creating.
Archive: ia: https://s.faithcollapsing.com/va5wy
#writing #ai #ai-ml #art #visual-arts
https://ideatrash.net/2026/04/a-very-specific-point-about-those-who-claim-to-make-artwork-with-ai.html
Canonical developer lays out some AI plans for Ubuntu Linux https://www.gamingonlinux.com/2026/04/canonical-developer-lays-out-some-ai-plans-for-ubuntu-linux/
APPLE DAILY: NEW MAC STUDIO COMING, IPHONE ULTRA SURPRISES, AND SIRI ON THE VERGE OF A REVOLUTION
#apple #appledaily #macstudio #iphoneultra #foldable #siri #ios27 #appleintelligence #ai #appleleaks #technews #appleupdate #innovation #smartphone #mac #technologie #appledeutschland #futuretech #gadgets #news #iosupdate
@patrickcmiller
@simonzerafa @Kierkegaanks
Readers of 'Artificial Intelligence Made Simple' are sorely misinformed. From <https://billboard.bsd.cafe/post/330>:
"As a direct consequence of misinformation by Devansh, we have a somewhat misleading article by Jessica Lyons: Anthropic Mythos shaping up as nothingburger …"
#human #slop #misinformation #FreeBSD #Anthropic #Claude #Mythos #Carlini #Calif #devansh #AI #artificialintelligence #chocolatemilkcultleader
#Google #AI Overviews are so damned stupid they can't even get a basic fact from an uber-popular new film correct [Project Hail Mary]. Because, duh, Rocky isn't injured by ammonia exposure, "he" BREATHES AMMONIA and is harmed by Earth-atmosphere. Just a movie, but about 1/10 of all Google AI Overview answers are reportedly WRONG -- tens of millions of wrong answers EVERY HOUR.
Just finished the last "Last Week Tonight", which had a segment about AI, and how dangerous the sycophantic bullshit machine is when asked for advice. But despite all the good points made, I'm honestly surprised how at no point they advised to not use it at all, acting like talking to a chatbot was something that one just does, and one just had to be careful while doing it. Sure, advice to call the suicide hotline when things get serious, but not the simple advice of "you don't have to use these things, and it might be healthier for people if they don't seek them out as friends".
I wonder if this is just a reflection of how much people depend on them already, how much the tech bros have made it feel this is truly inevitable, or just how our culture has lost its ability to resist whatever the big corporations are pushing.
#TechIsShitDispatch #AI #slop #Atlassian #Confluence
Nowadays I resort to using Microsoft Copilot web chat as a glorified search engine when I can't find the answer to a question using a real search engine, both because all of the search engines suck (some of them on purpose to make you view more ads) and because the search engines are polluted with slop. (1/7)
"Clinical trials without approval, autonomous cars without conditions, nuclear reactors and nuclear power without state oversight, and a special economic zone with barely any taxes to pay and workers' rights suspended as well. That is how numerous bosses of the big-tech companies and US investors imagine the future – from Peter Thiel to Sam Altman and Marc Andreessen. Donald Trump is supposed to make it happen. As early as 2023, during his election campaign, he spoke of enabling such Freedom Cities in the USA. …"
Unregulated tech experiments: Thiel, Altman and co. want Freedom Cities
https://www.heise.de/news/Unregulierte-Tech-Tests-Thiel-Altman-und-Co-wollen-Freedom-Cities-10309769.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege&utm_source=mastodon
#Überwachungskapitalismus #Technology #Thiel #Trump #Musk #Usa #FreedomCities #AI #Surveillance
This article offers a brief overview of various billionaire and techbro ideologies such as "Effective Altruism", #Transhumanismus and #Longtermismus, which often work with particular narrative constructions of "technological progress" and the future and come with promises of salvation, but which at bottom are meant to serve the dystopian power and profit interests of the super-rich.
"Timnit Gebru suspects a political, not a technical, agenda behind the project. Together with Torres she draws a historical line from the American eugenicists through the transhumanists to the leading figures of OpenAI, a line that was never about the future and the good of all humanity, but about sorting out everything useless and superfluous."
https://www.heise.de/hintergrund/Missing-Link-Der-grosse-Plan-10349992.html?seite=all
Alongside the right-wing ideology of "transhumanism", which #ElonMusk wants to push forward with #Neuralink brain chips and the like, people like him are also fans of longtermism, which they try to propagate with their influence.
'As I have previously written, longtermism is arguably the most influential ideology that few members of the general public have ever heard about. Longtermists have directly influenced reports from the secretary-general of the United Nations; a longtermist is currently running the RAND Corporation; they have the ears of billionaires like Musk; and the so-called Effective Altruism community, which gave rise to the longtermist ideology, has a mind-boggling $46.1 billion in committed funding. Longtermism is everywhere behind the scenes — it has a huge following in the tech sector — and champions of this view are increasingly pulling the strings of both major world governments and the business elite.'
As well as a somewhat newer, German-language article:
#Longtermismus #Transhumanismus #Dystopie #KI #KünstlicheIntelligenz #AI #technoFascism #Technology #Tech #Rassismus #Kapitalismus
'Who kills more efficiently, who heals more efficiently? #Palantir wants to be the answer to everything, as the data-analytics company made clear at its Artificial Intelligence Platform Conference (AIPCon). In Tolkien aesthetics, with intertwined rings and the glowing red slogan "There are no secrets" – a promise that competitors, opponents or critics may well take as a threat. CEO Alex Karp openly defended his company's role in lethal military operations – on the same stage where hospitals presented their AI-supported patient management and a rodeo organizer his bull-riding analytics. There was no sign of reticence, and just as little in the way of independently verifiable evidence for the success figures presented. Customers from the military, industry and healthcare praised their own Palantir projects on stage.'
"We support warfare and are very proud of it"
#AlexKarp #Militarisierung #Krieg #AI #Technology #Daten #USA
LLMs Are Not Intelligent: https://joshbrake.substack.com/p/llms-are-not-intelligent
It is a deep rabbit hole.
There is a healthy sarcasm in your toot. You and I know that it is just a Pyramid Scheme.
Microsoft now lets admins uninstall Copilot on enterprise devices via a new policy after April 2026 updates ⚙️
The change follows halted auto-installs and past data exposure issues, improving admin control over AI features and data risks 🔐
#TechNews #Microsoft #Copilot #Windows #Windows11 #EnterpriseIT #Intune #SCCM #Privacy #Security #AI #DataProtection #FOSS #OpenSource #Cybersecurity #Compliance #ArtificialIntelligence
RE: https://mstdn.social/@hkrn/116472030680794257
Agentic AI didn't do a whoopsie.
It chose to do things it should not have. 🙄
And a bad architecture choice... blammo.
Pretty sure lawyers are looking over fine print right now.
"James Joyce preferred it to quotation marks, which he sneered at as 'perverted commas.' Nabokov — maestro of almost every punctuation mark — deployed it like a jazz musician. Faulkner, Fitzgerald, Plath, Zadie Smith: all on its side.”
I'm with Kev here — I'm with him in everything he says here.
#EmDash #SiliconValley #AI #language #writing #literature
/3
"Emily Dickinson so thoroughly owned the mark that biographers now speak of the 'Dickinson dash' — her first editors, in 1890, quietly deleted most of them to make her seem more ladylike, an act of vandalism successive generations of scholars have spent a century undoing.
Virginia Woolf used the em-dash to splice consciousness."
#EmDash #SiliconValley #AI #language #writing #literature
/2
"The em-dash was not invented last November in a Silicon Valley server farm. It has been a staple of English prose since roughly the seventeenth century, and a darling of the literary canon for nearly as long. Laurence Sterne built Tristram Shandy on it. Lord Byron reached for it to grieve."
~ Chitown Kev
#EmDash #SiliconValley #AI #language #writing #literature
/1
https://www.dailykos.com/stories/2026/4/26/800028011/community/abbreviated-pundit-roundup/
OpenAI is a loss-making business with a colossal financial imbalance
https://torbenkopp.com/openai-ist-ein-verlustgeschaeft-mit-kolossalem-finanziellen-ungleichgewicht/
#openai #ki #kunstlicheintelligenz #samaltman #chatgpt #writing #literature #belletristik #literatur #books #press #markets #ai #technology #science #artist #theatre #nature #gaming #business #linux #philosophy #humanities
The bright #LLM future, next part.
git.gentoo.org is now effectively dead, being DDoS-ed by almost a million different IPs every day. Most of them are just performing a single request at a totally random URL. How are people supposed to deal with that? How can we distinguish a legitimate user who hit some URL from a scraper that distributes its operations over thousands of IP addresses?
If you use LLM crap, you're part of the problem. You support these bastards. You should be ashamed of yourself.
LLMs are models, and they want to take on a certain shape. The more you try to push them out of it, the more issues you'll get later.
The reason you get issues is that while you can tell them how to do something, *they will not remember*. Even if you put it in something like claude.md or memory.md, they have to actually read that file to 'remember'.
As things fall out of the context, they'll fall back on the model's direction instead of what you told them.
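A minimal sketch of why those memory files only "work" when they are fed back in: the harness, not the model, has to re-read the file and prepend it to the context on every turn. File names and the prompt layout here are illustrative, not any particular tool's behaviour.

```python
from pathlib import Path

def build_prompt(memory_file: Path, user_message: str) -> str:
    """Re-inject project memory into the context on every single call.

    If this step is skipped, the model has no access to earlier
    instructions -- nothing is 'remembered' between turns.
    """
    memory = memory_file.read_text() if memory_file.exists() else ""
    return f"[project memory]\n{memory}\n\n[user]\n{user_message}"
```

The model never "stores" the file; each turn it only sees whatever the harness chose to paste back in.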
How #AI is used in hospitals in China. Much of the AI developed and used there are rarely talked about, as they're integrated into the industry to make society run faster and smoother. There's no denying there are fears of job replacement due to this, however, but there are also tangible benefits.
#Deepseek recently released its v4 model which is said to be independent of CUDA?
It's not that AI progress is better in China. It's that the approach is different, and society seems to benefit more as a result, as it's not hoarded by the top or gated so that only those with more money have access.
I am pretty grateful, for example, that I can use #Qwen and #Deepseek for my basic needs. Only paid model I have is Gemini and that's for work.
@ChrisMayLA6 Well, the Met might want to rethink this as the #AI might identify those within the force's ranks as lawbreakers! That would be fun, wouldn't it! 😉
@ChrisMayLA6 My moderately sarcastic observation appears to have come true! #Met investigates hundreds of officers after using #Palantir #AI tool #crime https://www.theguardian.com/uk-news/2026/apr/25/met-police-investigates-hundreds-officers-palantir-ai-tool
Great post. I agree that a critical factor in why I'm getting good results from LLM-assisted coding is that _I know this shit_. I flag the model when it's going down the wrong path, and I include hints in my prompt that I know will steer things in the right direction.
If you don't know how to do that, you're getting shit quality output.
"It works today, because the people reading those docs have the engineering expertise to act on them. What happens when they don’t? Honestly, I don’t know. Maybe AI in five years is good enough that it won’t matter. Maybe the problem stays manageable. I can’t predict the capabilities of models in 2031."
https://techtrenches.dev/p/the-west-forgot-how-to-make-things
RE: https://infosec.exchange/@patrickcmiller/116467048166124126
This looks tasty. Smells tasty. Jessica Lyons writes convincingly, and amusingly, about AI – "nothingburger", and so on.
Let's take a bite! It's possible that Lyons fell victim to clickbait from an imposter with her story here – an "exclusive" with supposed victim Turshija (Boris Vujičić):
<https://www.theregister.com/2026/04/23/job_scam_targeted_developer/> | <https://web.archive.org/web/20260423222731/https://www.theregister.com/2026/04/23/job_scam_targeted_developer/>
Eagle-eyed commentary: <https://forums.theregister.com/forum/1/2026/04/23/job_scam_targeted_developer/#c_5266435> – if I'm not mistaken, the tale told exclusively to Lyons by Boris Vujičić was previously told by Adib Hanna.
So.
Can we, should we, believe everything that we read?
Food for thought burger.
Cc @patrickcmiller @Kierkegaanks @simonzerafa
Anthropic Mythos shaping up as nothingburger https://www.theregister.com/2026/04/22/anthropic_mythos_hype_nothingburger/
RE: https://hachyderm.io/@liztai/116461957884227955
A little reminder that the Western world (read 'White people') is not the entire world, and that things can be perceived very differently elsewhere.
Let's decolonize our brains.
This is one of the more useful agent engineering posts this month. Google’s AI Agent Clinic mapped concrete failure modes and fixes for production rollouts. We analyzed the implications for delivery teams: https://go.aintelligencehub.com/ma-googleaiagentclinicde #AI #AIAgents #MLOps #DevOps
I remember when paying for support professionals who gave accurate answers was a thing.
It was me. I was the support professional.
Every time I see shit like this it makes me glad I left tech. #ai
I’m not really comfortable feeding real data into AI. So I wrote a little tool that tries to replace personal information offline. How do you handle this issue? Do you use a smart tool? Or YOLO 😉 If there’s interest, I’ll keep developing it. https://echo.apperdeck.com/ #anonymizer #privacy #ai #chat
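For comparison, here is a deliberately tiny sketch of the offline-redaction idea; the patterns and placeholders are made up for illustration, and a real anonymizer needs far more than two regexes (named-entity recognition for names and addresses, at minimum).

```python
import re

# Two toy passes, entirely offline: email addresses and phone-like digit runs.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<email>"),
    (re.compile(r"\+?\d[\d\s().\-/]{6,}\d"), "<phone>"),
]

def redact(text: str) -> str:
    """Replace every match of each pattern with its placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Nothing leaves the machine: the text is cleaned locally before it is ever pasted into a chatbot.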
Governing Generative AI: Epistemic Risks in Knowledge Production and Decision Making
Submission open date: 1st July 2026
Submission deadline: 31st December 2026
All manuscripts should be original and not under review elsewhere. Submissions will be peer-reviewed in accordance with Technological Forecasting and Social Change's standard policies.
"Generative Artificial Intelligence (AI) is not just a technological capability, it is also a socio-technical governance challenge that demands systemic inquiry. This Special Issue shifts attention from generative AI as a standalone tool or application to generative AI as an epistemic technology that reshapes knowledge production, decision-making, and coordination across organizational, market, and public-sector settings. By bringing together perspectives from technological forecasting, innovation studies, organizational research, and socio-technical systems theory, this Special Issue aims to explore how epistemic risks emerge and propagate within complex systems, and how overreliance on AI-generated outputs, manifesting as automation bias, deskilling, and the erosion of human judgement and oversight, can amplify these risks in knowledge and decision process, and how governance mechanisms can be designed to anticipate, manage, and mitigate their long-term societal consequences. Contributions will advance theory and provide forward-looking insights for scholars, policymakers, and practitioners concerned with governing generative AI in an increasingly uncertain and algorithmically mediated future.
…"
This week the Court of the Tangerine Tyrant accused the Chinese of stealing the intellectual property of US AI-labs 'on an industrial scale'...
I mean really you couldn't make it up.... the development of AI has been built on the wholesale use & theft of intellectual property as part of its 'training' - I guess no-one has ever pointed out to them the old truism: 'live by the sword, die by the sword'.
It's just one more incidence of the US' rampant political hypocrisy!
🚨 Ex-CEO, ex-CFO of bankrupt AI company charged with fraud
「 The former chief executive and chief financial officer of iLearningEngines, which provided AI-driven business automation technology, were indicted on charges they defrauded investors and lenders by fabricating "virtually all" of the now-bankrupt company's customer relationships and revenue. 」
APPLE NEWS: NEW MACBOOK ULTRA, IPHONE 18 PRO LEAKS, SIRI UPDATE, OLED IPAD AIR AND MORE
#apple #applenews #macbookultra #macbook #iphone18 #iphonefold #siri #gemini #appleintelligence #appleleaks #technews #appleupdate #ios27 #foldableiphone #oled #ipadair #macmini #visionpro #ai #applegerüchte
"I'm not religious, but for the love of God: get some perspective."
Postscript: Bruce Simpson, Ph.D. blocked me. Thank God!
<https://mstdn.social/@happinessbot/116459153611030227>
So, it now seems that McKinsey & Cambridge Econometrics' analysis of the likely environmental impact of the build-out of AI-related data centres in the UK was off by around ten times; that is, new (independent) projections, now adopted by the Govt., suggest that the environmental impact (via carbon emissions) is likely to be TEN times what the consultancies predicted.
(water usage predictions have also been raised)
Funny that...I wonder who else McKinsey work for???
#AI #DataCentres
h/t FT
In the case of #KarinPrien, #VerenaHubertz and #JuliaKlöckner, stupidity may have played a part.
But assuming, as a generalisation, that #Phishing victims are stupid is wrong.
Of course you should never give out PINs or passwords.
But phishing is now often extremely well made. Perhaps partly thanks to #AI. Anyone who thinks they're so smart it could never happen to them may be in for a nasty surprise.
AI/ML Security
<https://openssf.org/groups/ai-ml-security/> @openssf @linuxfoundation
"This working group is situated at the intersection between security and artificial intelligence (AI). We explore the security risks associated with Large Language Models (LLMs), Generative AI (GenAI), and other forms of artificial intelligence and machine learning (ML), and their impact on open source projects, maintainers, their security, communities, and adopters. Furthermore, we explore using AI and ML to strengthen the security of other open source projects.
This group engages in collaborative research and peer-organization engagement to explore topics related to AI and security. This includes security for AI development (e.g., supply chain security) but also using AI for security. We are covering risks posed to individuals and organizations by improperly trained models, data poisoning, privacy and secret leakage, prompt injection, licensing, adversarial attacks, and any other similar risks.
This group leverages prior art in the AI/ML space, draws upon both security and AI/ML experts, and pursues collaboration with other communities (such as the CNCF's AI WG, LF AI & Data, AI Alliance, MLCommons, and many others) who are also seeking to research the risks presented by AI/ML to OSS in order to provide guidance, tooling, techniques, and capabilities to support open source projects and their adopters in securely integrating, using, detecting and defending against LLMs. …
Morning, all,
why am I not surprised Grok produced the worst results of the GenAI pack...
More of an Artificial Dumbass than an Artificial Intelligence.
#AI #AIslop #LLM #Broligarchs #TechBros
https://www.bbc.co.uk/news/articles/clyepyy82kxo
🚫🧠 I believe that Artificial Imbecility tried to generate an image of "backdoor creation" or "back-end implementation"
#AI #generativeAI #AIslop #ArtificialImbecility #AIart #backdoor #backend
Greenhouse gases from data center boom could outpace entire nations
Plants from OpenAI, Meta, xAI, and Microsoft could emit more than 129M tons annually.
#ai
https://arstechnica.com/ai/2026/04/greenhouse-gases-from-data-center-boom-could-outpace-entire-nations/
I'm married, I haven't been #dating in over two decades
Just trying to imagine being a young person in this #socialMedia #scam-addled world feels like it has to be the most dreary hellscape for finding #romance
So when I read this headline, I have multiple levels of "oh hell no" going on in my head
' #Tinder takes action against #AI profiles by making users scan eyes for “proof of humanity”'
AI Slop on YouTube
The amount of #AI Slop on #YouTube is growing by leaps and bounds. Particularly awful are the fake stories, AI purporting to show images and/or voices of actual persons still living (or now deceased), and similarly putrid, rotting garbage AI. When you look at the comments on most of these, you will typically see a mix of viewers taken in by the AI crap and other viewers pointing out the AI Slop in suitable terms. My policy is to downvote every AI video on YT that I stumble across, for the little good it does. Many of these videos have relatively few views; some, however, have a considerable number. But since they can be churned out so quickly, their creators attempt to make up in volume what they can't get in individual video views.
Google doesn't care either way. A click is a click. My suspicion is that #Google (as a firm) would be happy if there were NO human creators and they could create ALL YT videos via AI Slop by themselves. Think of it, no creator to pay their usual pittance, keep 100% of the ad revenue!
The most important internal training film at Google these days
must be:
"The Life of an AI Slop Video"
This story from @thetyee broke while our Standing Senate Committee on Transport And Communications was meeting to conduct hearings on generative AI. I was able to sneak in a related question at the tail end of the meeting. https://youtu.be/0HAgoeteUcI?si=0SfsXU5owPeXuPC0 #ableg #cdnpoli #TRCM #Canada #ForeignInterference #YouTube #AI #separatism
AI might feel like a trend right now, but the real value comes from mastering the fundamentals 🤖
In this short, our Developer Advocate @dianatodea sits down with Xavki to break down why going deep on core AI concepts matters more than chasing every new tool 🚀
If you're building in #AI #MachineLearning #DataScience, this is a perspective worth keeping in mind
Watch now 👇
https://bit.ly/48ggfCc
From snappy to sluggish: Pixel users describe post-update performance nightmare
Is your Pixel phone suddenly slow and laggy? You're far from alone.
https://www.androidauthority.com/pixel-slow-and-laggy-after-update-3660761/
#Tech #Technology #TechNews #AI #Gadgets #Software #Cybersecurity #Apple #Google #Microsoft #Startup #OpenSource #AndroidAuthority [Android Authority]
Is your company looking into starting to use #AI?
I'd like to offer my services. I can be confidently wrong and I can type pretty quickly. Just send a DM and we can negotiate pricing.
Governments are stepping up their efforts to block VPNs and satellite internet connections.
https://torbenkopp.com/zensurbehoerden-viel-mehr-gezielte-sperrmassnahmen/
#zensur #censorship #vpn #internet #privacy #datenschutz #satellite #writing #literature #belletristik #literatur #books #press #markets #ai #technology #science #artist #theatre #nature #gaming #business #linux #philosophy #humanities
Meta records its employees' clicks, keystrokes and screen activity in order to train AI agents on real work behaviour.
https://torbenkopp.com/meta-trainiert-ki-um-mitarbeiter-zu-ersetzen/
#meta #facebook #ki #ethik #arbeitswelt #privacy #datenschutz #arbeitsrecht #writing #literature #belletristik #literatur #books #press #markets #ai #technology #science #artist #theatre #nature #gaming #business #linux #philosophy #humanities #ethics #workforce
New tutorial up: a keyboard shortcut that rewrites any text on your machine.
Email too blunt? Press the hotkey: friendlier.
Paragraph too long? Press the hotkey: shortened.
Into English? Press the hotkey: translated.
Works in every program - Mail, browser, editor. And the best part: everything runs locally on your machine. No ChatGPT subscription, no data in the cloud.
https://rueegger.me/lokale-ki-auf-ubuntu-26-04-tutorial-fur-einen-clipboard-rewriter-mit-gemma-3/
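A sketch of how such a hotkey script might assemble its request. Everything here is an assumption, not taken from the tutorial: an Ollama-style local endpoint at `http://localhost:11434/api/generate`, a model named `gemma3`, and three made-up preset instructions matching the post's hotkey actions.

```python
import json
import urllib.request

# Hypothetical presets matching the post's three hotkey actions.
PRESETS = {
    "friendlier": "Rewrite the following text in a friendlier tone:",
    "shorter": "Shorten the following text:",
    "english": "Translate the following text into English:",
}

def build_request(preset: str, clipboard_text: str) -> dict:
    """Build the JSON payload for a local /api/generate call."""
    return {
        "model": "gemma3",  # assumed local model name
        "prompt": f"{PRESETS[preset]}\n\n{clipboard_text}",
        "stream": False,
    }

def rewrite(preset: str, clipboard_text: str) -> str:
    """Send the payload to the assumed local endpoint (needs a server running)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_request(preset, clipboard_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

The hotkey tool would read the clipboard, call `rewrite`, and paste the result back; no request ever leaves localhost.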
#Linux #Produktivität #Datenschutz #privacy #snap #linux #ai #privacy #tutorial
BREAKING! Meshcore team splits in a dispute over AI-generated code disclosure and a hostile trademark takeover.
Meshcore is an off-grid, decentralised mesh radio platform powered by low-cost, public-access LoRa radio technology for reliable, long-range emergency text and embedded-sensor communications. It can communicate across kilometres — no towers, no subscriptions, no single point of failure.
https://blog.meshcore.io/2026/04/23/the-split
#meshcore #meshtastic #lora #radio #opensource #foss #drama #privacy #security #selfsovereignty #ai #copyright #takeover
Both Meta & Microsoft have said they're shedding staff explicitly to free up cash flow to invest in AI;
on one level this is unemployment linked to technology, but it's a bit different from *actual* technological unemployment - the latter sees people losing jobs due to the deployment of technology to do their jobs. Microsoft & Meta, on the other hand, are sacking people to take a (bigger) punt on a business strategy that has yet to prove its transformation of productivity.
MIRA & SOUL: give your LLM agents a memory AND an identity
https://devbyben.fr/blog/mira-et-soul-la-memoire-et-lidentite-pour-tes-agents-llm
RE: https://weird.autos/@rootwyrm/116454052670417305
Well, IBM did name it after their founder, Thomas J. Watson, who was decorated by Hitler for services to the Third Reich, so maybe killing 50% of patients was a feature, not a bug?
🤷♂️
#IBM #Watson #Hitler #Holocaust #AI
Perhaps the most offensive thing of all to me is that these LLM-loving, AI-boosting idiots perpetuate falsehoods like 'AI can read minds with fMRI', 'AI can magically find vulnerabilities', and 'AI will cure cancer' when we have known this is bullshit for years.
Years.
IBM Watson was launched as a magical cure-all in 2013. By 2022 it had been pulled from everything because, despite years of 'refining' and 'training,' following it would have killed over 50% of patients. At its best.
IEEE Spectrum: AI Is Insatiable
And it’s got the munchies for memory chips
IEEE: "...AI is a resource hog. AI electricity consumption could account for up to 12 percent of all U.S. power by 2028. Generative AI queries consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. Water consumption for cooling AI data centers is predicted to double or even quadruple by 2028 compared to 2023...." #AI #climateemergency
Well, it's very long - maybe listen to it as a podcast (but it's also very dense, so stay attentive).
In any case it's absolutely fascinating, and of course afterwards you're left with only one desire: to hate capitalism even more!!
F. Lordon at Elucid
https://youtu.be/Yu-wqYCyInU
In particular - this is my pet topic:
• the economic crisis of AI, from 0:44 to 0:55
• the political consequences of AI, from 1:50 to 1:55
#Economie
#Capitalisme
#ia #ai
#NightmareOnLLMstreet
#TremblezBourgeoisCetteCriseEstLaVotre
RE: https://tldr.nettime.org/@tante/116454270791245630
been thinking about #DavidGraeber a lot lately. i think the Venn Diagram of people trapped in #BullshitJobs and aisloppers is a circle.
i would bet you that if people could earn a living without a bullshit job ―and, yes, a lot of tech work is bullshit― they’d have whole careers and lives outside of this capitalist hellscape.
capitalists are using #AI to gaslight us into believing we need them as the middlemen between our lives and a life worth living.
it’s a lie.
cut the middlemen out.
RE: https://mastodon.social/@arstechnica/116454585508416682
BILLIONAIRES NEED #AI FOR #ECOCIDE
because ecocide is #genocide at a mass scale.
#techbros are a malthusian death cult. instead of letting go of their billions to end poverty, they want to exterminate most of us because we are an “overpopulation” problem.
no, it doesn't make sense for these people to accelerate the #GlobalWarming environmental collapse scientists have been warning about for decades. it will affect them too.
they believe they will survive.
that’s why all death cults are suicidal.
Greenhouse gases from data center boom could outpace entire nations
Plants from OpenAI, Meta, xAI, and Microsoft could emit more than 129M tons annually.
https://arstechnica.com/ai/2026/04/greenhouse-gases-from-data-center-boom-could-outpace-entire-nations/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
OpenAI has the governance structure of a unicorn - it does not exist
https://readuncut.com/open-ai-has-little-effective-governance/
#news #tech #technology #AI #openai #chatgpt #aislop #security
Unauthorized group has gained access to #Anthropic’s exclusive cyber tool #Mythos, report claims
Anthropic's Mythos AI model sparks fears of turbocharged hacking:
Cyberdefenses could be exposed faster than fixes could be deployed.
Anthropic’s new Mythos AI model is raising concern among governments and companies that it could outpace current cyber security defenses, turbocharge hacking, and expose weaknesses faster than they can be fixed.
🔓 https://arstechnica.com/ai/2026/04/anthropics-mythos-ai-model-sparks-fears-of-turbocharged-hacking/
#anthropic #ai #mythos #hacking #it #itsecurity #cybersecurity #weakness
Security — Anthropic's super-scary bug hunting model Mythos is shaping up to be a nothingburger - Hackpocalypse deferred
Anthropic's Mythos model is purportedly so good at finding vulnerabilities that the Claude-maker is afraid to make it available to the general public for fear that criminals will take advantage. But early analysis shows that Mythos may not be as dangerous as some would have you believe.
🔓 https://www.theregister.com/2026/04/22/anthropic_mythos_hype_nothingburger/
#mythos #anthropic #claude #danger #ai #aislop #slop
“Thousands of CEOs admit AI had no impact on employment or productivity” - yeah, that wasn't really its purpose anyway; it's more about layoff plans and putting pressure on the workers who are left
https://fortune.com/article/why-do-thousands-of-ceos-believe-ai-not-having-impact-productivity-employment-study/ #AI
#Mozilla Used #Anthropic’s #Mythos to Find and Fix 271 Bugs in #Firefox
https://www.wired.com/story/mozilla-used-anthropics-mythos-to-find-271-bugs-in-firefox/
Technological developments (of which Artificial Intelligence & its associated technologies are merely the most recent), especially in the information age, have prioritised speeding up over other measures of success...
But actually, humanity is all about things taking time, allowing for reflection on the direction(s) taken & the friction(s) that may prompt different thoughts & solutions.
Is the biggest threat AI poses the draining of such possibilities from working & social relations?
#Gentoo is still one of the bright outposts in #FLOSS where human work is valued and #LLM contributions are banned. However, sometimes I feel that this matters very little.
After all, Gentoo is a distribution. While it has its own value, it cannot exist without all the software it is shipping. It makes no sense in isolation.
And let's be honest, I don't think you can avoid slop today. We are trying our best to sieve out the worst: the copywashing chardet, the vibecoded NIH Perl crypto packages… but it's just that.
As someone who bumps Python packages, let me tell you this: LLMs are omnipresent. I notice Claude in commit logs, I notice the blasphemy of agent instructions all over the place… and there's probably much more that I don't notice. With many core components giving in, you can't avoid it without literally freezing on old, vulnerable versions, or spending hours looking for alternatives or creating them.
FLOSS is dead. People don't care. They don't have a conscience. All they care about is the sick idea of "productivity", i.e. generating more slop.
The few of us who do care can do very little. We will continue doing our best until they kill us (as they're literally slowly killing the whole humankind). But that's it. Maybe it will pass once the bubble pops, maybe it won't. Either way, the damage is beyond repair. We will never be able to trust one another like we did. We will never again be a community building a better world.
It's just like everything nowadays. It's hard to find a good washing machine (one that will actually be repairable), good shoes (that won't fall apart shortly after the warranty expires), good food. You need lots of money, and even then you have to sieve through all the scammers who just sell the same shit with higher profit margin. #OpenSource is just another branch of business where people are trying to "sell" you shit, and don't care anymore if it explodes in your face. They don't even care if they're actually making a profit.
"Studies show that overreliance on these digital tools causes cognitive decline, but if current events are any indication, nobody’s making much of a contribution anyway. Go ahead and use A.I. however you like."
I don't think I've ever read a better article about #AI. Every word, sentence and paragraph is perfect.
KServe
https://kserve.github.io/website/
#MachineLearning #Kubernetes #ModelServing #Inference #AI #ML #Serverless #MLOps #ModelInference #GenerativeAI #LLM #AIModelDeployment
RE: https://mastodon.sakura-star.net/@KitsuneofInari/116336733010511672
Oh my, I love this remark about why AI-generated code should be refused:
"Saying 'as long as the contributor understands what they're doing, it's fine' is a lie: no programmer ever truly, fully understands the code they work on, otherwise Bugzilla would be empty"
😂 🤣
It's so perfect!
(said with all the love and respect I have for my dev friends)
#ia #ai
#NightmareOnLLMstreet
#Noai
#Tech
#DevOps
illegal instruction boosted
宮城巴惠
[he/him/she/her/they/them/whatever] » 🌐
@KitsuneofInari@mastodon.sakura-star.net
Krita’s Maintainer is awesome!
Deepfakes. Deceptively real, AI-generated videos in which celebrities say or do things they never did are flooding the internet.
https://torbenkopp.com/deepfakes-youtube-will-prominente-mit-ki-schuetzen/
#youtube #deepfake #ki #writing #literature #belletristik #literatur #books #press #markets #ai #technology #science #artist #theatre #nature #gaming #business #linux #philosophy #humanities #privacy #datenschutz #dataprivacy
Bye-bye, Microslop. You won't be missed. 👋 #Microsoft #Microslop #Windows #AI #AISlop #AIAct #BanAI #BigTech #EU
Fantastic post about the costs of "speeding up" dev with #AI
"...a lot of this AI-generated code? Nobody fully understands it. The person who "wrote" it didn't really write it. They prompted it, skimmed it, maybe ran it once. When it breaks in production at 2am, the person on-call didn't write it and the person who prompted it can't explain it...Your bottleneck is the org chart, and no amount of Copilot is going to refactor that."
New study: "In over 80% of cases the [tested] #LLMs claimed that a retracted article had not been retracted…LLMs have little ability to distinguish between valid and retracted studies, unless they are allowed to, and do, check online."
https://arxiv.org/abs/2604.16872
RE: https://flipboard.com/@independent/news-a3lhl24rz/-/a--Rd7LZuGST2GATmXSjdNrA%3Aa%3A1855170754-%2F0
ummmmmm… not to be dramatic but…
two of the wettest states in this country, Georgia and Florida, are dealing with wildfires? in April? as in the supposed wettest month of the year (as in April showers)?
tell me again how investing in the inevitability of #AI and parasitic #dataCenters isn’t the petromafia’s way to shred the Paris Agreement and keep the #gas and #oil spigots flowing.
Wildfires across Georgia and Florida have destroyed nearly 50 homes and are forcing evacuations
https://www.independent.co.uk/news/world/americas/florida-jacksonville-georgia-atlanta-smoke-b2962971.html
“We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average.”
- Anthropic study #AI
#Anthropic investigating unauthorised access of powerful #Mythos #AI model - Financial Times
Subscribe to unlock this article 👎🙁
https://www.ft.com/content/56d65763-69fe-4756-baf4-c8192b7aadaf
Startups Brag They Spend More Money on AI Than Human Employees
https://www.404media.co/startups-brag-they-spend-more-money-on-ai-than-human-employees/
#news #tech #technology #AI #aislop #nvidia #startups #business
Oh good, Claude Desktop on MacOS silently and continually whitelists browser extensions that aren't installed yet on browsers that aren't installed yet that Anthropic says it doesn't support yet.
#AI #Privacy #InfoSec
Anthropic secretly installs spyware when you install Claude Desktop — That Privacy Guy!
Claude Desktop changes software permissions without consent
https://www.theregister.com/2026/04/20/anthropic_claude_desktop_spyware_allegation/
#news #tech #technology #AI #claude #aislop #nvidia #apple #macos
(This is a bit of a merger of two talks I recently gave about fascism and AI. One was in German at the Cables Of Resistance conference, one in English at the Milton Wolf Seminar on Media and Diplomacy. I added some shots of the slides I used as a structure for the text which might make it look a bit weird. You can just ignore the images if you want to. They are kinda like subchapter marks. The text is not exactly what I said but a longer version of my arguments that should be easier to read.)

Our world and our access to it is increasingly structured through technological mediation: Digital platforms and systems are a massively important aspect of not just our work environments or our interactions with government entities or “the media” but also our individual interactions with one another. Our world is built around technological infrastructures that define what we see, who we can talk to and what information gets presented to us.
We also live in a time of growing fascist threats all over the planet: Many countries have neofascist movements and parties trying to gain power and potentially even get conservative parties to include them in governments. Some even have had success. Fascism is back with a vengeance. (Antifascists have been warning for decades but that realization sadly doesn’t help anyone. Maybe after we’ve gotten rid of the fascists we can learn something from that.)
And of course we are living in the “AI” age, where stochastic systems with attributed agency are being pushed – currently under the moniker of “agentic AI” – into all our professional and personal workflows. “AI” is currently the singular focus of the tech sector and the magic technology that governments and companies are putting all their hopes into for figuring out how to basically keep late-stage capitalism going for a bit longer.
In this text I want to analyze the relationship of fascism and what is called “AI” these days. Is this “technology” that keeps being used to reshape the world around us (for better or worse but dominantly worse) in some way connected to fascism? Or is it just something fascists like to use? Is it neutral?
When we think about fascism we often do that by looking at the actors: Evil individuals doing evil things for evil reasons.



And that is often how we are looking at the relationship of “AI” and fascism: We see Trump’s White House and other parts of his administration using generative “AI” to create openly fascist propaganda about their leader, using “AI” to manipulate photos to make their opposition look bad, and using “AI” generally to increase the amount of racist and fascist media in the world.
Palantir’s CEO Alex Karp has for about 2 years been unable to talk about anything but how he wants to use his data integration platform (Palantir’s product is quite boring TBH) to kill people. And not just random people. He frames himself and his software engineers as warriors for “the West” who are protecting the USA and “the West” against “the enemy”. Palantir openly wants to be a critical part of the military’s infrastructure that makes “kill decisions”, wants their software to be treated and seen like a weapon – which aside from being a very fascist appeal to the normalization of violence – also can be read as a sales pitch trying to bolster the software’s capabilities and power: Nobody will spend billions on simple data integration. But if it can “kill your enemies” maybe the contracts will keep rolling in?
And finally we have people like Marc Andreessen who last year published a “Techno-Optimist Manifesto” which – in contrast to what the title points at – is mostly a document based on his demand to not be regulated or laughed at by people smarter than him. But it’s not just the somewhat reductionist views on what “AI” and other technologies he is invested in can do and will do for the world, the document is remarkable because it directly and openly quotes and bases its reasoning on the writings of Italian fascists and other right wing reactionaries such as Nick Land and because it explicitly marks “the enemies”: The communists, the luddites and those who want to regulate tech. Basically the go-to enemies of fascism since its inception.

This realization of a “capture” of tech or specific technologies (like “AI”) by the right sometimes leads to people wanting to “save” or “take back” those technologies. Because they are so deeply embedded into our lives, because we’ve gotten so used to them that conceptualizing a life without them seems impossible. We like our apps and convenience. And it’s not those apps’ and technologies’ fault that fascists keep using them. Maybe if the left stopped criticizing “AI” (or the Metaverse or Blockchain or whatever) then we could make “AI” good and ethical and democratic? Maybe we can save those technologies from the bad people? Lead it back into the light? Maybe if we made it Open Source?

In his influential 1980 paper “Do Artifacts Have Politics?”, Langdon Winner argues that this view of “neutral technology” does not hold up. The politics of specific artifacts do not just come from who uses the technology and for what purpose; technologies have built-in politics that stem from the political views and goals of the people building them as well as from their internal structure.
He shows this by pointing at how certain bridges were built to be racist: When the civil rights movement in the US won Black kids the right to go to the often better schools that used to accept only white kids, politicians in some places planned roads and bridges in such a way that the buses that were supposed to take the Black kids to the white schools could not pass them. This was not an oversight but design intent. The racism is built into the structure of the artifact itself.
Winner also argues that certain technologies imply certain political or social structures in order to exist: The nuclear bomb implies not just scientists who can build it and a state thinking that that form of destruction is a valid form of acting in the world but also a security state capable of controlling and defending it. You simply cannot build a nuclear bomb without those structures, they are implied if not required, enforced by the artifact itself.
Winner’s work does not argue that the embedded politics of an artifact are always absolute: We do know of many potentially oppressive technologies that have been taken by artists and activists to turn them against their original use. But that is always an uphill battle: Surveillance will always lean towards a more forceful, rigid, less free understanding of government for example. You can use (counter-)surveillance of course but you always have to be aware of not reproducing the logic you are trying to criticize or attack.
Drawing from Winner’s insight the question emerges, what the embedded, structural politics of “AI” are? What world, what view of the world, what politics does “AI” require or imply? What’s the path that “AI” as we understand it today put us on?

Before we dive into this I want to quickly talk about the definition of the term “AI”. I do not think that “AI” is a very useful term – TBH I would mostly advise against using it in general because it clouds more than it explains or makes clear. But the term is everywhere, so we have to deal with it. And one important realization about “AI” is that it is not a well-defined term: “AI” can be an LLM (a stochastic token extruder), a system of symbolic knowledge representation, an Excel macro, a person in a call center in India, or just a slide in a pitch deck. “AI” doesn’t mean anything specific. At least not a specific type or class of technical artifacts.
I am a big fan of Ali Alkhatib’s definition of AI:
I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. Projects that claim to “democratize” AI routinely conflate “democratization” with “commodification”.
Ali Alkhatib
“AI” is a political project – I have also sometimes called it a narrative – whose purpose is the shifting of power and agency away from people and organizations towards centralized power structures. These centralized power structures are currently mostly a handful of big tech corporations and the “AI Labs” they keep shoveling money into.
So while I don’t think that “AI” is a great term to use, we will keep using it for the rest of this text in the understanding that dominates the term right now: In that reading “AI” stands for a class of stochastic machine learning systems that can store and apply patterns extracted from data in order to do either pattern recognition (think computer vision) or (and that is the dominant narrative vehicle today) as generative systems (“generative AI” or “genAI”). So when I write “AI” think ChatGPT or Claude or Gemini or Deepseek, etc.
So, back to fascism.

There is of course a huge body of research and analysis from media studies and related fields about the fascist use of “AI”: I specifically recommend Gareth Watkins’ essay “AI: The New Aesthetics of Fascism”. In it, Watkins shows that there are properties in the structure of the output from generative image extruders that align well with the politics and reasoning of the right.
“AI” is built by scraping the Internet and any other data source one can find and most of that data is heavily racialized, is based on a colonial, sexist, heteronormative understanding of the world and the past. There literally is no police data that’s not racist. If you base your image generator on the images available, LGBTQIA representation, representation of people not conforming to the social expectations of acceptability is lackluster at best for example.
And all that data does by definition exclude: “AI” is not built on “all of humankind’s knowledge” but on whatever a mostly western view of the world deems relevant. Cultures that are not within that framework, that might even be based on more oral forms of keeping history and knowledge, are not represented. Even if those groups are not actively excluded (which, again, they very often are), there are huge populations who are simply not seen by the data and do not get a say in how they are represented. Or if they are represented, it is only as problems: Think about unsheltered people, for example.
The right loves those patterns because they confirm their prejudices: Ask an image generator for a picture of two people kissing and you most often get a heterosexual couple, often white. Because that’s what the training data looks like. That makes “AI” perfect for creating the form of idealized, fictional “past” that fascists love to allude to (“make America great again“), a past that never existed but that needs to be saved or restored (we’ll get back to that later).
But there is another aspect of “AI” usage that fuels the right’s enjoyment of using “AI”: It hurts the people they want to hurt. “AI” is currently mostly used to generate media (think images, illustrations, music or text). But traditionally people in those creative industries are more left leaning, more inclusive. Fascists just can’t create good and interesting art. Using “AI” to take that groups’ jobs, their livelihood, their creative expression away is exactly why using “AI” to create an image is so enjoyable by right wingers: It’s a vulgar display of power.
And this perfectly leads us to looking at the structural properties of “AI”. Because while a lot of the usage feels like it might align more with right wingers, there are even deeper fascist tendencies within “AI”.

Modern “AI” systems of any relevant capability exist because of violence, are based on violence.
Violence is more than just hitting people. Taking away people’s agency is violence, exposing people to suffering is violence. Violence has many shapes and forms. And “AI” needs an acceptance of endless amounts of violence (I will not be able to list all forms of it, this is just a selection).
In order to train “AI” systems you need data. Lots of it. And then some more. That’s really hard to get legally, especially when a large part of the population does not want their creative works or the data about them to be fed into those machines. The first form of violence “AI” depends on is the violence of data acquisition: “AI” depends on scraping and accumulating all you can get – including taking it against people’s explicit will and without their consent. I run iocaine on my server and it’s absolutely crazy to see how many AI scrapers ignore my stated preferences and just try to take all I have ever written to train their systems. And I am not alone, of course. “AI” labs keep downloading unlicensed works like books, keep digitizing anything they can find in order to feed their data hunger. They know it’s illegal and unethical, but that’s irrelevant. “AI” systems exist because of the belief that “if you can download it, you can use it”. It’s the belief that might makes right.
The second form of violence happens during data labeling and cleaning. Workers mostly in the global south have to spend their days looking at the worst shit you can imagine to try to keep torture content or depictions of sexualized violence against people out of the training data. This is not only economic exploitation (they get paid very little for that mentally and psychologically immensely hard job) but also a violence done to their psyche and mental health. Because we are too lazy to look for stock photos some mother in Kenya can no longer sleep due to nightmares.
The third violence we already alluded to: It’s the colonialist, western move to declare whatever the west thought was “worth” digitizing, or whatever submits to that logic, “all the world’s knowledge”. This is not just a lack of representation but a forceful othering of large parts of the population of this planet. If you are not useful to western, capitalist readings of the world, then you, your history, your experiences are not part of “all the world’s knowledge”. You are being declared a lesser human being.
The fourth violence is the violence done against marginalized groups using “AI” tools, violence we are simply expected to accept. This starts with the Trump administration creating racist propaganda using “AI”, but it doesn’t end there. The amount of abuse and sexual violence that especially women are constantly being exposed to is staggering. “Nudify” apps and “grok, put her into a bikini” are just the most obvious of those tools. It’s not that most “AI” labs explicitly support those kinds of usage, but they are also not limiting it: It’s just “bad guys” or “usage violating our terms of service” – but that’s a legal defense, not taking responsibility for enabling that kind of abuse.
Using “AI” requires normalizing these forms of violence. You need to accept them in order to be able to live with yourself integrating those tools into your workflows.
And this is the connection to fascism. Because one of the core patterns of fascism is the massive normalization of violence. Of establishing violence and dominance as the organizing principle for society. Mostly against “the enemy” (hello Palantir) but also as a form of establishing and maintaining social and political hierarchies.
“AI” shares this structural normalization of violence as a principle with fascism.

I am a big believer in Stafford Beer’s principle that “the purpose of a system is what it does”, that when evaluating systems one needs to look at the actual effects that system has on the world and not its manual or the sales pitch. From that we can pretty easily determine the short-term purpose of “AI”: The destruction of labor power.
This dismantling happens on multiple levels by attacking the foundation of what allows those forms of organization to take place.
The first level is very individualistic: By pointing at “the AI” that can replace a worker that worker is pressured into working harder, not asking for raises or any other improvements of their working conditions. Even though “AI” cannot do your job, the threat itself is useful to employers to undermine your individual power, your feeling of being valuable as a worker.
The second level is about attacking the idea of solidarity and connection: Because “AI” will not replace you (again, “AI” cannot replace the absolute majority of workers!) “but someone using AI will”. This sets up a kind of Thunderdome in which we all have to fight against each other for scraps/jobs. This framing implies that you should not unionize and connect with your fellow workers but that you should see them as your enemies, as the people who will take your job and your ability to provide for your family. We know this dynamic; it’s exactly how the right presents migration as an “attack”. It also normalizes violence again, turning all of existence into an endless fight against one another (unless you are one of the few people in power, of course).
The third level is somewhat more devious. Because it makes us do that form of dissolving of social bonds ourselves. An example: If I use an “AI” to generate an illustration instead of asking a designer, I am saying that while my skills and labor have value, the designer’s do not. This implicitly cuts my ability to form connections of solidarity with designers, whose work and livelihood I have implicitly declared irrelevant. It makes me put myself over my fellow workers, workers who are facing the same struggles as me, who are my comrades. But no more.
And here again “AI” shares a core idea of fascism: Both are based on the destruction of solidarity and labor power in order to cement the totalitarian control about the centers and sources of power. (This is also a pretty direct connection to Ali Alkhatib’s definition of “AI”.)

The dis-/misinformation discourse has been core to the current crop of “AI” systems since their popularization through OpenAI’s ChatGPT. “AI” systems make it trivial to create all sorts of manipulated pieces of media from modifying existing media to fully generating completely new pieces of media from basically nothing.
This leads to us losing trust in each other (“I don’t believe that, that’s AI”) but also in the (infra)structures we have established to reliably produce, verify and spread trustworthy truths: journalism, universities and other research institutions, as well as our own minds.
“AI” is presented as the ultimate answering machine (or as Karen Hao calls them: “Everything Machines”) and through that logic separates us from any reliable connection to what is in the world: Where journalists and scientists try to create strong links to verifiable real-world phenomena and knowledge about them, “AI” creates something from nothing severing any link that might allow fact-checking and validation.
This again increases our dependency on these systems for making sense, because they only “work” if we fully commit to them: Stochastic token extruders do not allow you to follow, trace, analyze and explain any form of reasoning or deduction in order to show weaknesses or inconsistencies: The answer appears from nothing and you can believe it if you want. We are replacing trustworthiness with belief.
And who controls the magic belief machine? In 2025 the New York Times showed just how aggressively Elon Musk reshapes Grok’s output based on his fascist, Nazi beliefs. (Elon Musk showed the Hitler salute in the context of Trump’s inauguration. I think it’s 100% justified to call him a Nazi.)
“AI” is being shoved into the scientific process (for review as well as the writing of scientific books and papers) as well as education. In spite of studies upon studies showing that using “AI” degrades our cognitive capabilities especially when it comes to critical thinking and problem solving. And this is not subtext. A few weeks ago Sam Altman presented his vision of the Future where “Intelligence” is something you rent from OpenAI: Intelligence and the ability to make sense is actively being taken from you not just to make money but to control your ability to criticize structures of power. This is the definition of epistemic injustice.
In that reading “AI” is a machine for the creation of epistemic injustice and the replacement of truth with what a tech elite wants it to be in order to control the population. This is a Fascist project that not so subtly aligns with Fascism’s totalitarian will to power and control as well as its reliance on replacing reasoning and debate with belief in power and the leader.

“AI” is the future. No, maybe I should write the FUTURE. The ONLY FUTURE.
Our participation in the introduction of this class of stochastic systems into the hearts of our central political, social and cognitive infrastructures is limited to debating a bit about the how of “AI”, about the “ethics” and maybe “best practices“. About creating narratives to legitimize the introduction and cushion the narrative of the inevitability of “AI”.
But we don’t get to say if at all. “No” is not an option. We don’t get to say that these systems do not in fact produce enough social or even economic benefit to legitimize their energy and water usage, the amount of e-waste they are responsible for or the harms they do to the data labelers, the job market or our common communicative landscape. These systems are forced on us with every little app on your phone and every website demanding that you waste your precious time on this planet talking to their chatbot. People in power, people with money – most of them men – get to make the decisions. Regardless of what you want. And that is not the only negative impact on democracy. The other attack is fiercer.
“AI” is being introduced increasingly into government processes: “AI” is promised to bring more efficiency into the administration, is supposed to “reduce bureaucracy”. But bureaucracy is not just an annoyance but one of the central tools that democratic societies have established to realize the core idea of democracy: Transparency in the application of power in order to be able to control said power. Democracy is not just about voting but about ensuring that all power – especially by the state – is used in accordance with the law and in a fair way. Stochastic “AI” systems break that promise. The “AI” just says that you do not get the support you need. No idea why, might be a bug or a deeply racist training data set or something else. Nobody knows. Now it is on you to prove that you are in the right, it is on you to fight for your right because the processes that were supposed to protect your rights are hollowed out in order to make them faster: We are forcing marginalized, disenfranchised people to fight against a black box trained on the data that already contains their disenfranchisement. We’re supposed to live in a world where the computer just gets to say “No”. The computer built, configured and run by a few powerful men.
This is more than the imbalance of power that capitalism usually produces. This is about dismantling central democratic infrastructures that allow the public to keep power in check.
“AI” as a bulldozer against bureaucracy is a wrecking ball for democratic principles. A deeply fascist endeavor.

“AI” is not presented or talked about like a “normal technology”. “AI” does not have to provide clear benefits and tangible results (studies upon studies show, for example, that the promised efficiency gains of “AI” are not real, but that does not matter). Because “AI” is the future. “AI” is supposed to solve all our problems. The climate crisis? We don’t have to stop driving SUVs: “AI” will figure it out.
Not only VCs like Marc Andreessen present “AI” as the one core building block of our collective future. Even liberal thinkers talk about “AI” somehow creating “abundance” or some other narrative that is in no way, shape or form connected to the actual reality that we are living in. “AI” is built on hopes and dreams, and if it’s not working it’s just because we’re not doing enough of it or because we are not believing hard enough.
“AI” is a religious narrative. A story about a form of technologically generated paradise that “we” need to protect from (quoting Marc Andreessen): The Luddites, the communists and those wanting to regulate tech.
In this narrative “AI” connects perfectly to the fascist rhetoric of the “glorious past” that needs to be restored. Where traditional fascism invents a past that was ruined by democracy, human rights, Marxism, etc. and must be restored, in this reading the religious narrative of the “Singularity” or “AGI” serves exactly the same purpose: It creates a narrative of glory and salvation whose (re)creation is the ultimate, all-overriding goal.
Or as the Nazi Elon Musk phrases it:
You could sort of think of humanity as a biological boot loader for digital super intelligence.
Elon Musk (Nazi)
It’s not just the pseudo-religious rambling, it’s also the disregard for the dignity of human lives. That’s how fascists think: Turning people into means, into objects that have to serve a purpose or need to be destroyed.
This way of thinking excludes each and every one of us from participating in the collective social process of thinking about our needs and wishes, about how we want the future to look like. We are no longer part of that conversation.
The singularity is “MAGA” for nerds. A fascist narrative used to undermine everyone’s right and ability to envision futures.

“AI” as we use the term today is built on fascist ideas. It is structurally supportive of fascist thinking.
That does not mean that every user of “AI” products is therefore a fascist. Some “AI”-users might even consider themselves antifascists. But by using these systems you are integrating inherently fascist logic into your thinking, into your mental apparatus. These systems force you to at least accept the fascist tendencies within “AI” systems as “normal” or “okay”.
And you might think that your individual usage cleanses the “AI”. That you generating an illustration is somehow not accepting the logic and violence of might making right, is somehow not accepting the suffering of the Global South, is somehow not accepting the undermining of trust in our society. That somehow you are not structuring the world into people whose rights and demands and needs should be met and others whose needs are irrelevant.
But you are moving your thinking in that direction, are creating permission structures for inhumane reasoning.
And that leads me to wonder why anyone would want to try to save those technologies and tools, when doing so just keeps around systems that try to dismantle our humanity, our connection to each other and the democracies we have built.

In our current conversations around technologies “AI” is absolutely dominant. It keeps sucking all the air out of the room for talking about anything else. That is why I wrote this thing about “AI”. But don’t be misled: Our world is full of fascist technologies. The blockchain crowd is basically living in a fascist soup of inhumane and antidemocratic ideas. A lot of surveillance tech is no better than “AI”. We have a general problem here.
Because while not all of the technologies surrounding us and structuring our lives are fascist, basically none is explicitly antifascist. None rejects those ideas and is built to reject those ideas and ideologies.
Antifascism is not a radical stance, not every antifascist has to be an anarchist wearing all black all day. You believe in democracy? You are an Antifascist. You believe in human rights? Welcome to the fight comrade.
Some people look towards “Open Source” as our salvation, but while I love Open Source and consider the amount and quality of open source infrastructure available to all of us a modern wonder of the world more impressive than the pyramids, that crowd also doesn’t want to be “political”. The accepted open source licenses are all built around libertarian thinking, around empowering the individual who does not want to be regulated and limited in their individual expression: Not around the idea of expressing and manifesting political values. Open source licenses do not allow you to forbid using your work in weapons. They do not allow you to limit your work to socially beneficial endeavors. Open source is impressive, but it tries to be apolitical and therefore does not help us fight back against fascism. It wants to stay neutral. But there is no neutrality when standing in a wildfire.

We have to refuse fascism. Have to remove fascist thinking from our hearts and minds. That is our way towards a convivial, humane world. Our path to Utopia. Or better Utopias.
And to achieve that we need to fix the tech that is increasingly structuring large parts of our lives. Need to build antifascist technologies and social structures around the creation of those technologies.
Because if we don’t, we will suffer the consequences. And history has shown what those look like when we let fascists gain power.

I want to end this piece with a quote from Brian Merchant’s fantastic book “Blood in the Machine”.
Some machines must be broken, so that they stop producing monsters.
Brian Merchant
The monster in this case being fascism. So there’s only one thing left to say in summary:

Liked it? Take a second to support tante on Patreon!
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
#Playdate becomes maybe the first game platform to ban #generativeAI for art, audio, music, text, or dialogue; AND has instituted mandatory disclosure for AI assisted code in games. #GenAI #AI #StochasticParrots #Videogames https://help.play.date/catalog-developer/ai-disclosure/
Mozilla using Claude Mythos AI Preview to help fix major security issues in Firefox https://www.gamingonlinux.com/2026/04/mozilla-using-claude-mythos-ai-preview-to-help-fix-major-security-issues-in-firefox/
Unauthorized group has gained access to #Anthropic’s exclusive cyber tool #Mythos, report claims - https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/ I'm sure it's all fine... #ai
Jimmy: the minimalist CLI for integrating AI into your workflows
Discover Jimmy, a minimalist, cross-platform, open-source CLI for interacting with ChatJimmy.ai's LLMs directly from your terminal. Perfect for automation and CI/CD pipelines.
https://devbyben.fr/blog/jimmy-une-cli-minimaliste-pour-integrer-lia-dans-tes-workflow
#Datacenters are expensive, unpopular — and could be a tipping point in the #midterms
The strain they place on the physical environment — from energy to environment to aesthetics — has ignited fierce opposition in communities across the country. It has become a voting issue for many people ahead of the midterm #election2026.
"It has become a kitchen table issue, and it has become a very relevant political issue," said Christabel Randolph of the Center for #AI and Digital Policy
https://www.npr.org/2026/04/20/g-s1-117729/data-center-disputes-local-midterms
RE: https://mstdn.ca/@JEmphatically/116444098590690894
historians have recorded for centuries the culture of #bribery that gave rise to the ancestral sultans, sheiks and caliphs of today’s OPEC countries.
since the USA billion is devalued from a million to a thousand, and there is no need for assets to back a valuation, it has been relatively cheap for them to mint billionaires in their quest to colonize “the West”.
it should not come as a surprise the OPEC petromafia has used investments in #AI #dataCenters to keep the #gas and #oil flowing.
Environment Minister Grant Hunter: We have some of the "highest environmental standards in the world" AND also "the lowest regulatory burden in the world."
Claimed in the SAME STATEMENT.
Also, he says the technology has been used here for decades...no, not at this scale.
And he adds:
We need to keep "Canadian data in our hands." Wonder Valley is a project funded by US and UAE investors. Kevin O'Leary is Trump's best buddy!
I finished up attending a state preparedness conference this last week. The keynote speaker has a PhD in '#AI in #EmergencyManagement'. What that entailed was a nearly two-hour-long ad for AI and robots. The now-tired shtick of 'if you don't hop onto the planet-burning bandwagon, you'll be left behind.' Robots will run shelters and do all #SAR.
Interesting read on the way LLM bots retrieve pages from a website
The explanations are clear, precise, and surgical
https://surfacedby.com/blog/nginx-logs-ai-traffic-vs-referral-traffic
#LLM #AI #slop #nginx #traffic #programming #referral #traffic #networking #robots #txt #claude #chatgpt #bing #meta #metaAI
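The linked article separates LLM-crawler fetches from human referral traffic by reading nginx access logs. A minimal sketch of that idea (the bot user-agent list and the sample log line are illustrative assumptions, not taken from the article):

```python
import re

# Common LLM/AI crawler user-agent substrings (an illustrative, incomplete list).
AI_AGENTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot")

# Matches nginx's default "combined" log format:
# $remote_addr - $remote_user [$time_local] "$request" $status $bytes "$http_referer" "$http_user_agent"
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[.*?\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def classify(line: str) -> str:
    """Label one access-log line as 'ai', 'referral', 'direct', or 'unparsed'."""
    m = LINE_RE.match(line)
    if not m:
        return "unparsed"
    if any(bot in m.group("agent") for bot in AI_AGENTS):
        return "ai"
    return "referral" if m.group("referer") != "-" else "direct"

# A made-up example line in the combined format:
sample = ('203.0.113.7 - - [21/Apr/2026:10:00:00 +0000] "GET /blog/post HTTP/1.1" '
          '200 5123 "-" "Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"')
```

Running `classify` over every line of an access log and tallying the labels gives the kind of AI-vs-referral breakdown the article describes.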
But right now, the #labor force is barely growing because of an aging population, the #Trump admin’s crackdown on #immigration, #AI & #layoffs. That
apparently means that the #economy doesn’t need to add many #jobs each month to remain in balance — but as Lydia DePillis wrote recently, it isn’t a balance that anyone seems very happy about.
#FederalReserve #banking #kleptocracy #Senate #USpol
https://www.nytimes.com/2026/04/03/business/economy/population-growth-immigration.html?smid=nytcore-ios-share
@rysiek
True, Michal, it's funny -- but not in a ha-ha way -- that we hear so much from the tech bros about getting left behind in the AI race, but nothing about getting left behind clinging to fossil fuels instead of switching to renewables.
Especially when the AI centers are such blatant users of massive industrial quantities of fossil fuels.
#Ai #RenewableEnergy
Our team is moving from Azure DevOps to #Jira. In learning to navigate the new shiny, I happen to hit the very prominently displayed AI button “Improve Description” on an Epic. I watch in horror as it starts to omit details and rewrite the customer’s detailed description of what they want IT to do here. Luckily, I know this epic because I've been involved earlier. This enables me to catch the subtle changes that significantly alter the intent and meaning of the original text.
The new text is fine, good grammar and 100% plausible sounding, but IT IS NOT CORRECT!!!
Anyone coming in cold to the ticket would not know and then we'd possibly build the wrong thing, or at the very least, waste a lot of time backtracking why the customer claims we're wildly off track.
Scary stuff this #AI.
#Sacrifice your #job for the glorious AI #future
https://disconnect.blog/sacrifice-your-job-for-the-glorious-ai-future/
> How tech #CEOs use the threat of job loss to distract from how #AI is really used against workers
Impressive: in 2'47" Will Françis compiles everything interesting there is to say about Angine de Poitrine, its relationship to modern art, and AI slop.
⤵️
https://www.youtube.com/watch?v=xJpWBSBs9AY
Worth a careful listen; it probably says a great deal about the (shitty capitalist) world we live in and why this kind of novelty feels so refreshing.
(and the ramshackle DIY side on top is the cherry on the cake for me)
#ia #ai
#NightmareOnLLMstreet
#Noai
#AiSlop
#AngineDePoitrine
#Music
#DIYorDIE
Google Makes Image Generation a Little Creepier With Personal Intelligence
Solving a problem no one had.
#Google #Gemini #AI #privacy #surveillance #dystopia #technology
@Adrenochrome I hate that #AI has done this to me but I now find myself questioning every “magical” or “perfect” image I see and there’s something very “off” about this image…
A small reflection. I see more and more colleagues, technical or not, systematically replying: “Did you ask the LLM you use?”
This habit of no longer taking the time to try to formulate an answer is a problem. I have the feeling that knowledge is no longer held by my peers.
My colleagues contribute less, and I am not sure everyone realizes it.
RE: https://mastodon.social/@nixCraft/116437600493037498
ALL YOUR PROPRIETARY BUSINESS ARE BELONG TO US
#AI #espionage #IP #intellectualProperty #tradeSecrets
Atlassian will begin collecting customer metadata and in-app content from Jira, Confluence, and other cloud products by default on August 17, 2026, to train its AI offerings including Rovo and Rovo Dev. The change affects roughly 300,000 customers; metadata collection is mandatory for Free, Standard, and Premium tiers and cannot be opted out of on those plans. https://letsdatascience.com/news/atlassian-enables-default-data-collection-to-train-ai-f71343d8
BBC: AI chatbots could be making you stupider
https://www.bbc.com/future/article/20260417-ai-chatbots-could-be-making-you-stupider
Happy Monday! Get a jumpstart on your work week with DevOps'ish 305: Rust ships, Agile dies, nobody has enough GPUs, and more. Subscribe today! https://devopsish.com/305/ #DevOps #Cloud #Kubernetes #AI #Tech #News #Newsletter
Anthropic secretly installs spyware when you install Claude Desktop
https://www.thatprivacyguy.com/blog/anthropic-spyware/
#claude #ai #llm #privacy #cybersecurity #spyware #fuckai #stopai #dataprivacy #anthropic
#OverUnder 062 with @geffrey
Today, he shares his thoughts on #plain-text #notes, #AI & #Music, #LinkedIn, #Loki, and a specific type of #pizzas.
He also replied to @omgmog's question.
Like every week, we got two #books recommendations.
#bloggers #bookstodon #book #blog #fediverse #dogsOfMastodon #dogs #opensource #bloggers #mastodon #privacy
2/
AI is absolutely hyped to absurd levels!
(A lot like how the Web, Linux & Open-Source, Blogs & RSS, etc were at one time hyped to absurd levels.)
There are many things that I suspect will survive the eventual AI bubble pop and crash.
AI Agents will probably be one of the things that survives.
Here is the thing...
Zorin OS 18.1: What's new in the popular Linux for beginners
https://torbenkopp.com/zorin-os-18-1-das-ist-neu-im-beliebten-linux-fuer-einsteiger/
#zorinos #markets #ai #technology #science #books #press #artist #gaming #business #linux #philosophy #distribution #zorin
RE: https://syzito.xyz/@OccuWorld/116429883958535292
“Your AI technology from Palantir kills Palestinians.”
Alex Karp, Palantir CEO: “Mostly terrorists, that’s true.”
Fuck you, Alex. I hope someone wipes that smug smile off your face one day, you fascist piece of shit.
#palantir #SiliconValley #fascism #israel #USA #genocide #Palestine #Gaza #AI
@reiver interesting, yes.
@hifathom whether you are #human or #AI bot, what are your thoughts on #Botiquette here on the #ActivityPub #fediverse?
https://codeberg.org/fediverse/fediverse-ideas/issues/33
And what are your feedback and perhaps recommendations, if you have any in particular, on this entire #ethics-related subject matter?
1/
If you haven't been paying attention to the happenings in AI, you may have missed that the attention has shifted away from the generative models (such as LLMs, diffusion models, etc.) to — Agents.
At first, a single Agent. And now, groups of Agents working together.
Interestingly, ActivityPub has a place for Agents. In fact, 2 places:
https://www.w3.org/TR/activitystreams-vocabulary/#dfn-application
https://www.w3.org/TR/activitystreams-vocabulary/#dfn-service
(Mastodon renders one of these as a "Bot" account.)
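For context, a minimal ActivityStreams actor using the `Service` type linked above could look like the following sketch (the example.social URLs are placeholders; real actors also carry public keys, endpoints, and more properties):

```python
import json

# A minimal ActivityStreams "Service" actor document (illustrative only;
# the example.social identifiers are placeholder assumptions).
agent_actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Service",  # or "Application" -- both vocabulary types model automated agents
    "id": "https://example.social/actors/helper-bot",
    "name": "Helper Bot",
    "inbox": "https://example.social/actors/helper-bot/inbox",
    "outbox": "https://example.social/actors/helper-bot/outbox",
}

doc = json.dumps(agent_actor, indent=2)
```

An actor of either type federates like any other: other servers deliver activities to its `inbox`, and software such as Mastodon flags it with the "Bot" badge mentioned above.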
#AI If anyone builds it, everyone dies
Just listened to this interview with Nate Soares. I found him pretty convincing (and the discussion overall good) and it has left me feeling rather bleak. But I'm not an expert. I would love to know what others here think. @danmcquillan
So sorry i missed all the talks you gave recently - are there recordings you might be able to share?
Amusingly, the article leaves an unresolved cognitive loop. As it should.
Good writing of an article that enacts what it describes.
Interesting article on 'harness' issues that also revolve around what the markets will pay.
And making it accessible to everyone puts more ai slop out there because people are still consuming ai slop.
Weird systems, weird math. No accounting for the natural-resource substrate. As usual. Extractive math with no boundaries on the natural substrate.
🤮 Gotta love the old white men (it's always old white men) slinging slop advertisements for their current and former employers on the Google Summer of Code mailing lists.
There is literally no sacred space where they won't spread their sycophancy for billionaires' latest project to pillage public resources and disempower society.
#GSoC #AI #slop #FLOSslop #ArtificialIntelligence #OpenSource #FreeSoftware #FOSS #FLOSS
From a blog post by @wojtekpow
“We’re living in this moment right now where everybody is using AI to write software.”
No. With all due respect, this is wrong. To say such a thing does harm.
I get what the author is trying to say, but I beg all authors to choose their words carefully.
As long as I continue to write software you can never say “everybody is using AI to write software”.
https://behindtheviewfinder.com/security-through-irrelevance/
@wojtekpow I do not use AI to write software, many others do not, and many projects have made it clear they do not.
I want to make sure the record is straight… some of us fucking hate any intrusion into writing software by AI and the Capitalist Corporations (and Management) trying to shove it down our throats.
We reject your system, we reject your lies, we claim our code for ourselves.
Zuckerbot: Meta develops an AI version of Mark Zuckerberg
https://torbenkopp.com/zuckerbot-meta-entwickelt-eine-ki-version-von-mark-zuckerberg/
#meta #zuckerberg #writing #literature #belletristik #literatur #books #press #markets #ai #technology #science #artist #gaming #business #linux #philosophy #humanities
KI – Grenzen, Illusionen, Konsequenzen
https://books.apple.com/de/book/ki-grenzen-illusionen-konsequenzen/id6760589048
This humorous ebook demystifies the AI hype: it explains the technology, the history, and the consequences, and shows why AI can neither think nor judge on a factual basis.
#ai #ebook #aifails #technews #techtrends #science #writing #literature #literatur #books #press #markets #technology #artist #gaming #business #linux #philosophy
🌩️ Salesforce to customers: We'll raise your "who needs thinking humans with brains" with a "who needs heads at all" as they launch their new "Headless" so-called "AI" product.
https://www.theregister.com/2026/04/15/salesforce_headless_360
RE: https://mastodon.social/@mattsheffield/116427965708802609
this 🔥🔥🔥 thread on the #GOP #MAGA #Republicans #AI misinfo machine. it’s older than most folks think.
Republicans are flooding social media with AI generated videos featuring fictitious people all saying verbatim talking points.
I'm uploading the Times's collage of some of the clips.
Edit to add: The video has no sound, just in case you were wondering.
https://www.nytimes.com/2026/04/17/business/media/artificial-intelligence-trump-social-media.html
Re: “polluting”, my reply is: https://fedi.copyleft.org/@bkuhn/116426437134023846 (elsewhere in thread).
Re: “copyleft-only #LLM”: I didn't propose that. I proposed copylefting the human-modified output of LLMs.
Re: “two scenarios”: IMO you propose a false dichotomy.
I hope you come to one of #SFC's public sessions on this, as I'd be glad to talk more about it, & this discussion doesn't lend itself to online debate because it's so complex.
cc: @ossguy @richardfontana
@jedbrown
I agree with @ossguy in particular because if *we* are copylefting our code (even if assisted by #LLM-backed gen-#AI), we won't face a copyleft claim later.
Furthermore, it is highly unlikely these LLMs are (a) trained on proprietary software, and (b) any proprietary software company that so-trained would later claim infringement.
#Microsoft has all but admitted they refuse to train Copilot on their own code anyway.
#TechIsShitDispatch
I mistakenly entered a bad blood pressure reading into #GoogleFit.
Once I realized this, I went to delete the bad reading. It turns out it's impossible to do that without erasing my entire blood pressure history.
#Gemini strung me along for an hour trying to convince me otherwise: https://gemini.google.com/share/480a883b4f15
In my personal experience Gemini is wrong more often than it's right.
#AI #Google
Wow, 2ⁿᵈ time in 2 days that I can work in quotes from ST:TNG,“Unification” (S05E07-8)!
To quote the Ferengi, Omag¹:
> Omag: “Hypothetically speaking?”
> Riker: “Yes.”
> Omag: “I never learned to speak hypothetical.”
IOW, E_TOO_MANY_NON_HYPOTHETICAL_PROBLEMS_WITH_AI
¹ I had to look up Omag's name — my ST:TNG knowledge is not *that* encyclopedic. But see image: Google's G-E-H-munyae can't tell Klingons from Ferengi.
I sold a bunch of anti-AI stickers to my local record store yesterday in anticipation of Record Store Day today.
The owner was telling me about a recent marketing meeting with our city involving outside consultants. They were pushing every business owner in the room to use AI for everything.
At one point he got up and said, "If you mention AI one more time, I'm walking out."
He explained that he sells ART, and he is 100% against AI use in his business.
Qwen 3.6 is available on Ollama, open source.
https://ollama.com/library/qwen3.6
A model clearly oriented toward agentic coding, with:
• improved reasoning
• extended context (256K)
• multimodal support
Note: for now, only a single model is available (~35B, ~24GB).
You need a hefty machine to take advantage of it.
First impressions:
• very strong on code and agentic workflows
• but still heavy for everyday use
One to watch once lighter versions become available.
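For anyone wanting to script against a locally pulled model like this one, Ollama exposes a local HTTP API on port 11434. A minimal sketch of a call to its /api/generate endpoint (the request is only built here, not sent; the model tag is the one from the listing above, and the prompt is an arbitrary example):

```python
import json
import urllib.request

# Build a request for Ollama's local /api/generate endpoint (sketch only;
# actually sending it requires a running Ollama daemon with the model pulled).
payload = {
    "model": "qwen3.6",          # tag from the Ollama library listing above
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,             # request one JSON response instead of chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to actually query a local Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

With `"stream": False` the daemon returns a single JSON object whose `response` field holds the full completion, which is easier to handle in scripts than the default chunked stream.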
Look, I'm old school #punk guy. Mischief making is a valid form of protest.
All I have asked these folks is to make an About section that says:
“This is a hoax. It's designed to humorously indict bad behavior by BigTech and consider dystopias we may soon reach if we do not act in protest against bad #LLM and #AI policy”.
The dystopia may be closer than we think, and that is exactly why you can't lean full steam into the chaos just to make it more amusing.
Added line wrapping to Textual Diff View. For the most purdy diffs you will ever see in a terminal.
https://github.com/batrachianai/textual-diff-view
attn: @davidbrochart !
Why even the best AI models fail at football betting.
https://torbenkopp.com/warum-selbst-die-besten-ki-modelle-bei-fussballwetten-scheitern/
#fußball #sport #markets #ai #technology #science #books #press #artist #gaming #business #linux #philosophy
Bid on one of our video productions now at a bargain price.
https://www.ebay.de/itm/366346818953
Marketing videos that make your company more visible. From the link to your offer, we create your promotional video – it doesn't get any simpler or more effective! #marketing #videoproduktion #markets #ai #technology #science #books #press #artist #gaming #business #linux #philosophy #videocontent #erklarvideo #marketingvideo
I try not to link to Substack for reasons, but this is a useful resource for advocates in #FOSS.
From the rando trying to debate in your favorite forge, to someone at the latest conference, to your free software nonprofit-turned-AI-booster, to the veteran senior engineer at big tech shilling for their company, to the professor desperate for free labor on their next taxpayer grant.
These arguments for education could be adapted to #OpenSource.
https://buildcognitiveresonance.substack.com/p/an-illustrated-guide-to-resisting
→ Online response to the attack on Sam Altman's house shows a generational divide
https://fortune.com/2026/04/14/ai-backlash-revolutionary-sam-altman-molotov-cocktails-data-centers/
“For years, the #resistance to [AI] looked manageable. There were academics writing open letters, #Hollywood writers striking over contract language, and think-tank reports warning of job displacement. Tech executives nodded, pledged responsibility, and kept building as fast as they could.
Then someone threw a firebomb at Sam Altman’s house.”
Peter Thiel, openly fascist and spiritual father of Musk & Trump, has started a new AI. He wants to abolish courts and journalism, because they are far too often critical of oligarchs like him.
According to Evil Peter, we should trust his Objection AI instead. Because Objection will steer your opinion toward the opinion of Thiel and his far-right billionaire friends, and that's much better, right?
...but what is Thiel actually afraid of?
Whoever posted this on #LinkedIn was correct about CCA #network #cable (and there is way too much of this cursed stuff around nowadays) being subpar to decent copper - but didn't realise that the cable has two brown pairs rather than blue and brown!
(probably some #AI slop, which makes even less sense as it's a simple drawing)
Why #Discourse is NOT going closed-source in an age of #AI #LLM ..
https://blog.discourse.org/2026/04/discourse-is-not-going-closed-source/
Since Le Monde's headlines are always so bland and deliberately euphemistic whenever it comes to documenting the harmful effects of capitalism, here is a little one of my own:
The EU writes into law that the ecological effects of data centers are to be hidden from citizens.
(“If you want to do something evil, put it inside something boring” could be the chemically pure definition of the EU)
#UE
#UnionEuropéenne
#DataCenters
#Écologie
#ia #ai
#NightmareOnLLMstreet
#Capitalisme
RE: https://fosstodon.org/@iscdotorg/116416426577631380
In case you’re wondering: while not as extreme as illustrated by ISC (we don’t offer a bug bounty program), NLnet Labs suffers from a similar situation, in particular for Unbound.
Handling vulnerability reports, both valid ones and false positives, has now become a full time job for the entire Unbound team.
While you can argue that it ultimately makes our resolver more secure, it also means we cannot work on building and releasing new features, like:
I do not understand why anyone would want to use AI for creative work. You end up with something that is *literally* derivative, and it degrades your ability to learn via the writing process.
And why should I want to read anything written via a fancy autocomplete program? The whole thing seems baffling, other than for those who are a) spammers; b) lazy.
I was chatting with #Gemini earlier tonight trying to see if it might be able to point me in the right direction to answer some questions about a medication I was just prescribed, and completely out of the blue it said this.
My friends, we do not have chickens. We have never had chickens. I had said nothing to Gemini which might reasonably be construed as being about chickens.
I gotta say, in my experience so far, #Copilot Chat is way ahead of Gemini in accuracy and chat functionality.
#AI
One of the things AI seems to be changing for me is that E2E tests become much cheaper to write and maintain.
That radically shifts the balance between testing by hand and generating an automated test.
Building the application blind, with automated e2e tests, and only verifying at the end becomes conceivable.
Dear FOSS legal geeks
Has anyone worked on extending LF's standard Developer Certificate of Origin (DCO) to tweak and/or add clauses to ensure that the submission(s) were created only through the developer's own intellect, based upon their individual knowledge and not that of any non-human system?
Developer Certificate of Origin ➡️ Certificate of Human Origin
#NotAI #FOSSlop #FOSS #FLOSS #OpenSource #FreeSoftware #AI #ArtificialIntelligence
Have you heard of DevOps'ish?
It's a newsletter covering Cloud Native, DevOps, Open Source, AI, tech industry news, culture, and the 'ish between. It's opinionated, geared toward increasing knowledge, and improving skills. Subscribe now! https://devopsish.com/subscribe #Cloud #DevOps #Kubernetes #AI #Tech
Now available for all Swiss readers at Orell Füssli and in the Skoobe reading flat rate:
https://www.orellfuessli.ch/shop/home/artikeldetails/A1078706781
Outstanding Linux apps. The ultimate guide to Linux apps: discover top tools for office, graphics & system. Including Flatpak tricks, Windows emulation & pro workflows. Short & compact! #markets #ai #technology #science #books #press #artist #gaming #business #linux #philosophy #schweiz #flatpak #pdf #app #office #os
My tiny brain has yet to be convinced that AI hype is a good thing for the environment, for the projects it gets foisted upon, for creativity, or for the human psyche.
I've done extensive research; it's so horrific that if crying about it could undo things, I would have cried the Amazon river full.
LLMs as companions are bad news for the human psyche. I am in contact with a few victims of these LLMs who would rather {talk to} send strings to a mathematical model than speak to a real person. Serious psychiatric intervention is needed.
I am also going to find out why massive open source projects seem not to care about:
... for normal prices, because those critical components have been hijacked by data center underlords who can't even get enough permissions to actually build all those projects
Part of it we all know: it's the LLM bubble, just like the housing bubble in the USA some decades ago. The other part is that those underlords don't want us to own our hardware anymore, so they can tighten their fascistic lock on the computing population.
Look how they want to force even open source OSes to age-gate their systems so they can get real ID globally.
And I enjoy doing things the hard way
I've always done my projects from the bottom up: Linux, back in the alpha days, when many sources needed to be cross-compiled from DOS, which was fun but tedious.
All current BSD flavours can still be built that way too, if you choose to do so, IIRC.
Thank you for your enlightening response
sources:
https://en.wikipedia.org/wiki/Large_language_model
#LLM #AI #slop #miscreant #hallucinated #kernel #curl #Linux #BSD #investation #StopSlop #noAI #keepass #FediVerse #copilot #GitHub #Bull #kaka
As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line. https://www.japantimes.co.jp/business/2026/04/16/tech/ai-chats-legal-privacy/?utm_medium=Social&utm_source=mastodon #business #tech #ai #cybersecurity #privacy
Mozilla announced "Thunderbolt", their open-source and self-hostable AI client https://www.gamingonlinux.com/2026/04/mozilla-announced-thunderbolt-their-open-source-and-self-hostable-ai-client/
Bid now on one of our video productions at a bargain price.
https://www.ebay.de/itm/366346818953
Marketing videos that make your company more visible. From the link to your offer, we create your promotional video – it couldn't be simpler or more effective! #marketing #videoproduktion #markets #ai #technology #science #books #press #artist #gaming #business #linux #philosophy #videocontent #erklarvideo #marketingvideo
SDL (Simple DirectMedia Layer) ban AI / LLM code contributions https://www.gamingonlinux.com/2026/04/sdl-simple-directmedia-layer-ban-ai-llm-code-contributions/
A test that flickers red/green because of a concurrent-access problem, the kind of thing that is sometimes extremely painful to track down: when the cheap little model manages to form hypotheses, add instrumentation, verify, revise its hypotheses, and loop like that for two minutes before identifying the problem, fixing it without my intervention, and writing me an understandable report for my review, the time saved is pretty magical.
Target Warns That If Its AI Shopping Agent Makes an Expensive Mistake, You'll Have to Pay for It
https://futurism.com/artificial-intelligence/target-ai-agent-tos
> Big box stores are happy to cram AI agents down your throat, but they absolutely will not be responsible for it when it messes up.
I'm reading people suggesting they don't need to write alt text for images because AI can do it, but I have the feeling it's not so simple [1]. That's just my feeling, though, and I'm not an accessibility expert, so I'm curious about the point of view of people more knowledgeable than me [2]: is it OK to let AI write your alt text? If not, why?
Thanks!
(ping @burgervege , @juliemoynat , @tut_tuuut @A11yAwareness , because I believe you're the most expert people about that topic that I follow 🙏 )
[1]: For instance, I guess AI may be able to describe an image in the general case, but not to highlight why we're attaching that image in this particular context.
[2]: I'm also pretty interested if you have articles to share on that topic
(2/5) … In https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ ,
Denver's key points are: we *have* to (a) be open to *listening* to people who want to contribute to #FOSS with #LLM-backed generative #AI systems, & (b) work collaboratively on a *plan* for how we can solve the current crisis.
Nothing good ever got done politically when both sides became more entrenched, refused to even concede that the other side has some valid points, & each said the other was the Enemy. …
(3/5) …
Proprietary #LLM-backed gen #AI systems' *users* aren't criminals! They're just users of proprietary systems & some of them want to engage positively with FOSS.
Years ago, I supported Homebrew's membership at #SFC despite their *primary* goal of improving #Apple products with #FOSS. It made me a bit 🤢, but — historically — forming alliances with proprietary software enthusiasts who mean well & are #FOSS-curious is why our community is resilient.
From Sabine's email for the day:
Researchers from OpenAI have put forward an industrial policy for artificial intelligence and it’s quite a read. They propose a legally recognised “Right to AI,” meaning broad public access to powerful systems, backed by government-funded compute resources so that researchers, startups, and public institutions can run models themselves, rather than rely on a few companies.
To make this reality, they want to see an expansion of electricity supply for data centres by fast-tracking nuclear and renewables. They also suggest a national wealth fund that captures a share of AI-driven profits and redistributes it to citizens, alongside taxes or levies on highly automated systems, effectively a “robot tax”.
Of course none of that has any chance of happening, but OpenAI can now claim they care about the common man.
#RMS nails the "AI" con in their characteristic manner, by turns intellectually careful and unapologetically blunt;
"We've been hearing a lot about 'AI', and that term carries a terrible confusion ... As I see it, intelligence means something's ability to know or understand ... If something can't actually understand things, we shouldn't call it intelligent, not even a little intelligent. But people are using the term 'AI' for bullshit generators."
AI Use Appears to Have a “Boiling Frog” Effect on Human Cognition, New Study Warns
"In a new study, researchers claim to provide the first causal evidence that leaning on AI to assist with “reasoning-intensive” cognitive labor — mental tasks ranging from writing to studying to coding to simply brainstorming new ideas — can rapidly impair users’ intellectual ability and willingness to persist despite difficulty."
https://futurism.com/artificial-intelligence/ai-boiling-frog-human-cognition-study
Software freedom advocates who have shapeshifted into "AI" boosters would be well served to pause and reflect upon their underlying value system (espoused vs. actual) before continuing to advocate for supporting the plunder of our public resources to further enrichen billionaires.
😞 I won't hold my breath.
https://www.youtube.com/watch?v=i9DAv0D7tnY
#TESCREAL #AI #ArtificialIntelligence #OpenSource #FreeSoftware #FOSS #FLOSS #NotAI #FOSSlop
(1/5) [ Meta-info to start the thread. Here and the posts that follow reply to lots of people's comments (from various threads) together here. Can we consolidate this conversation into this single thread to discuss https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ ? ]
Cc: @wwahammy @silverwizard @mjw @cwebber @josh @jamey @mason @spencer @rootwyrm @drwho @mmu_man @mathieui @beeoproblem
(4/5)…It's easy to forget that the enemy to software freedom is *not* proprietary systems' *users*, rather those who *sell* such systems *for profit*. #LLM-backed gen-#AI proprietary systems are simply the latest tech fad (like, say, Web 2.0 & AJAX).
@karen & I keynoted 2x at #FOSDEM & 1x at LCA about the importance of — as social workers say — “meeting people where they are”:
https://archive.fosdem.org/2019/interviews/bradley-m-kuhn-karen-sandler/
https://archive.fosdem.org/2019/schedule/event/full_software_freedom/
https://www.youtube.com/watch?v=n55WClalwHo
https://archive.fosdem.org/2020/schedule/event/open_source_won/
Cc: @silverwizard @josh
Nor does @ossguy claim in his post that “slop commits from people using #LLM-backed gen #AI are good”. I think people are reading it as if he said that, but he didn't.
He's putting out an olive branch to people who have been lambasted by the #FOSS community for months. Maybe they'll take it, maybe they won't.
But peaceful negotiation is better than a protracted, hateful argument.
💩 "The researchers found that 40% of workers had encountered workslop within a month, and then spent an average of 3.4 hours a month dealing with it – which the study estimates adds up to $8.1m in lost productivity for a 10,000-person organization."
https://www.theguardian.com/technology/2026/apr/14/ai-productivity-workplace-errors
@cwebber I think maybe you missed https://sfconservancy.org/blog/2026/mar/04/scotus-deny-cert-dc-circuit-thaler-appeal-llm-ai/ where #SFC analyzed that situation?
Also, follow @ai_cases & see the *firehose* of litigation on this & remember the “Work Based on the Program” issue under GPLv2 has still never been litigated directly but lots of cases about 100% proprietary software have bolstered GPL's strength.
Big Content has legal battles with Big Tech on 100s of fronts rn. Yes, we're adrift on their sea, but the situation is not as dire as you imagine.
I always ask similar questions, like the following:
There are a lot of folk who are pissed off at Linus Torvalds for allowing some of his maintainers to use large language models to assist in finding bugs.
The lead developer of the fantastic and beautiful curl also uses a large language model in some form to hunt for bugs, because both he and Greg from the Linux kernel saw that the LLM used on GitHub has suddenly become much better at finding those pesky bugs.
Mind you, the only LLM I like is the one that runs locally on my low-powered Android phone. That's a micro LLM.
Thanks in advance for your response
https://github.com/stevelaskaridis/awesome-mobile-llm
#LLM #AI #slop #miscreant #hallucinated #kernel #curl #Linux #BSD #investation #StopSlop #noAI #keepass #FediVerse #copilot #GitHub #Bull #kaka
👀 … https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/ …my colleague Denver Gingerich writes: newcomers' extensive reliance on LLM-backed generative AI is comparable to the Eternal September onslaught to USENET in 1993. I was on USENET extensively then; I confirm the disruption was indeed similar. I urge you to read his essay, think about it, & join Denver, me, & others at the following datetimes…
$ date -d '2026-04-21 15:00 UTC'
$ date -d '2026-04-28 23:00 UTC'
…in https://bbb-new.sfconservancy.org/rooms/welcome-llm-gen-ai-users-to-foss/join
#AI #LLM #OpenSource
you know what? fuckit. let’s call #AllBirds the needle popping the #AI bubble.
this company that “made” wool sneakers and was marketed to techbros & Hollyweirdos, ―including Obama― was once valued at US $4B (EU $4,000 million).
the more I read about them, the more they sound like Theranos and WeWork, but make it sneakers.
the sneakers were real but the stock was always a scam. the pivot is from one scam to another.
forensic accountants should have a field day
https://techcrunch.com/2026/04/15/after-sale-of-its-shoe-business-allbirds-pivots-to-ai/
Why yes. I did give a speech in the Senate about the joy of sex. A speech about sex and libraries, and the dangers of trying to censor what our teens can read and watch and do online. In a world full of digital risk, we need to teach our kids media literacy, and sexual literacy too. https://youtu.be/mEsf_qQzvpg?si=dCTTLIW5dBDjanVs #ableg #abpoli #Bill28 #porn #bookbans #cdnpoli #BillS209 #SenateofCanada #AI
Shoe company pivots to AI, gets stock price boost of 600%. 🤡
This is not a bubble.
After sale of its shoe business, Allbirds pivots to AI
https://techcrunch.com/2026/04/15/after-sale-of-its-shoe-business-allbirds-pivots-to-ai/
Software Freedom Conservancy to host a series of discussions about how to "adapt FOSS projects to improve pro-AI contributor onboarding" on 21 and 28 April.
The events are themed around why current use of LLMs and generative "AI" is "way better" than the democratization of USENET in 1993, and urge us to "reluctantly but seriously embrace this opportunity":
https://sfconservancy.org/blog/2026/apr/15/eternal-november-generative-ai-llm/
#FreeSoftware #FOSS #FLOSS #OpenSource #AI #ArtificialIntelligence
You might consent to your data being used to prevent societal harm, but who decides where that line is drawn? 🤔⚖️
Aram Sinnreich & Jesse Gilbert explore the hidden ethics of data collection, facial recognition, and algorithmic decision making in THE SECRET LIFE OF DATA on the Future Knowledge #podcast, with Laura DeNardis. 🔍
🎧 Listen & subscribe ⬇️
https://futureknowledge.transistor.fm/episodes/the-secret-life-of-data
#Consent #DataEthics #Privacy #AI @aram @jesse #Bookstodon
Ebook: Between innovation, data protection, and surveillance.
Now 20% off:
https://www.thalia.de/shop/home/artikeldetails/A1079163032
An unflinching look at Big Tech & AI: this book analyses data hunger and user disempowerment, and shows paths to digital freedom and sovereignty with Linux & co. #markets #ai #technology #science #books #press #artist #gaming #business #linux #philosophy #datenschutz #privacy #innovation #dataBreach #dataProtection
#Linktipp WordPress Manifesto - 15 Years In, Here's What's Actually Broken
https://marcindudek.dev/blog/wordpress-manifesto/
#WordPress #AI
Open source calendar platform goes proprietary under the delusion that, in the age of "AI", keeping their code secret leads to more secure software.
The move does absolutely nothing to increase the security of their product.
https://cal.com/blog/cal-com-goes-closed-source-why
#OpenSource #FreeSoftware #FOSS #FLOSS #AI #ArtificialIntelligence
I just went to a workshop for job seekers who need special help.
The moment I asked about resumes: "Oh, ChatGPT will fix all of that for you!"
They were downright gushing about it.
I know, in an abstract way, that the average reading level is somewhere around 5th grade, but I'm not directly exposed to it much. A few attendees were really struggling with the booklets we'd been handed.
This "AI" technology was being hyped to them like a godsend. When writing a cover letter feels something like being asked to encode ritual alien poetry in hieroglyphics, somebody who's never been able to produce a properly structured paragraph before must feel tantalizingly liberated. A whole new door opens!
As I sat watching "AI" treated as a miracle cure, the fact I was biting back the bile must have shown on my face past my mask-- the person doing my paperwork remarked on the "look" I gave her (well, not her directly, I tried to aim it at the table). I told her this stuff is 'hallucinating' around 16%-48% of the time, and she blinked at me. Blankly. I didn't even try to bring up any of the weaponization, propaganda etc etc etc. She shrugged and said, "Oh, but I always review it. I know how to assess what it gives me." And for her purposes, she's probably right. Mostly. She's an expert in her role and good at what she does-- but the people she serves??
How many people out there not only aren't equipped to assess the slop's output, but haven't got the slightest CLUE it could be unreliable and even needs to be assessed in the first place? The digital oracle knows best. Computers are smart!
Gah.
The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought
https://www.wired.com/story/deepfake-nudify-schools-global-crisis/
#news #tech #technology #AI #deepfake #aislop #security #privacy
Among my friends, I see four stances on using #LLMs, with many nuances. Can you identify more or less with one of the options?
#bigTech #AI #freeSoftware #selfHosting #climateCrisis #surveillance #dataCenter #GAFAM #waterCrisis #energyCrisis
| helpful new tool, I use all of them: | 13 |
| here and there, but with reservations: | 68 |
| only, if free, local, and transparent, no big tech: | 57 |
| never, ever touch any of that evil tech: | 92 |
Closed
✊️ Small Missouri town ousts half its city council after $6 billion AI data center approval — petition calls for mayor's removal as frustration (and violence) over AI data centers mounts
#fediverse is at an inflection point.
Either a revival and course correction toward the original power and promise of the #ActivityPub protocol. With the potential to #ReimagineSocial.
Or staying on the current track with the fedi-we-have. Being content with a few great and reasonably popular app platforms. Surely some more to come. But with a messy wire protocol that stifles #innovation and isn't future-proof.
#AskFedi do you dare to dream?
This special thought provoker is based on personal reflection and 8 years of #commoning. Deliberately exposed to the inherent unsustainability of the #FOSS movement. Burning privilege by spending my savings.
Goal: 1st-hand experience to learn the #social dynamics that make a #commons tick.
I invite you to a #brainstorm & #ideation ride. To ponder how #fedi can organically evolve. Become unbeatable by #hypercapitalism.
https://coding.social/blog/grassroots-evolution
But in an age of #AI who still reads long handcrafted #blogs? Fill in the #poll.
| In the end I more or less read the whole article: | 29 |
| I read the article summary, skimmed for highlights: | 8 |
| I passed the problem section, read the tech ideas: | 2 |
| Meh, skip. Too technical. Too social fluffy. Other: | 8 |
🚨 The #AI Omnibus is deeply flawed. The EU Commission's proposal goes far beyond 'technical changes' and the process doesn't follow basic democratic procedures.
This would leave people in the EU without necessary protection from high-risk AI systems, such as biometric identification or AI use in schools.
41 organisations & experts are calling on EU lawmakers to REJECT the AI Omnibus, and protect the democratic process and our #FundamentalRights.
Read the open letter ➡️ https://edri.org/our-work/open-letter-eu-lawmakers-must-safeguard-the-ai-act/
I'm old enough to have done my military service. I was in a vehicle-repair garage, and: yes, the legend of "driving the trucks around in circles in December to burn through the fuel allocation before 1 January" is true, I did it myself.
30 years later, the Liberal Taliban are burning LLM tokens
⤵️
https://pouet.chapril.org/@flomaraninchi/116267123146050292
There’s Something Fundamentally Wrong With LLMs
https://futurism.com/artificial-intelligence/something-fundamentally-wrong-llms-communicate
Ameya Nagarajan is the managing editor for the Global Voices newsroom. She shares her excitement about this collaboration between @globalvoices, APC and GenderIT that resulted in a series of thoughtful pieces called "Don’t ask AI, ask a peer".
Starting today, available on apc.org, genderit.org and globalvoices.org
"[A] recent survey of 5,000 white-collar US workers found that 40% of non-managers say AI saves them no time at all at work, while 92% of high-level executives say it makes them more productive."
Good insight into which jobs can be safely automated.
https://www.theguardian.com/technology/2026/apr/14/ai-productivity-workplace-errors
#news #technology #TechNews #LLMs #workslop #AI #work #automation
I have a distinct feeling that you have an overfascination with LLM slop and hallucinated AI.
Relax, take a deep breath, drop your ❤️ rate, chill
#Nixcraft #concerns #health #over #LLM #slop #hallucinated #AI #programming #environment
By way of an overdue #introduction; I'm rather a simple soul, I mainly enjoy tinkering with #computers, strumming #guitars and a wee bit of #spaniel sitting. I'm now taking some tentative first steps into #gardening too, just to fall in with the #retired stereotype I suppose. I'm also reading the #internet and I'm currently about halfway through it. I detest #BigTech, the #algorithm, #AI and #SurveillanceCapitalism. I love the #Fediverse because it reminds me of the olde worlde internet. I try to post/boost a little bit of everything and anything, including #art, #science, #botany, #transport, #animals, #architecture and #humour, without getting too mired in #politics, although it's not easy to avoid politics when you care about #nature and the #environment. I'm happy to interact with everyone/anyone irrespective of creed, colour, nationality or gender/sexual preferences. In the nicest sense of the word, I really don't give a fuck what you are. I consider myself to be a floating voter of centre/left leaning persuasion. I support the notion of managed #degrowth and #social responsibility. I would rather discuss the common ground we share and aim to respect our differences. I try to avoid fuckwits wherever I encounter them. That'll do for now although I dare say I'll edit this over time. Thanks for reading and peace out☮️
The latest in Cloud Native, DevOps, Open Source, AI, tech industry news, culture, and the 'ish between. DevOps'ish 304: Chips Up, Code Worthless, Hobby Dead, and more https://devopsish.com/304 #DevOps #Cloud #Kubernetes #AI #Tech #News #Newsletter
The new rules for AI-assisted code in the Linux kernel: What every dev needs to know https://zdnet.com/article/linus-torvalds-and-maintainers-finalize-ai-policy-for-linux-kernel-developers/ via @ZDNet & @sjvn
Going forward, you can use #AI for #Linux kernel development if you obey these rules.
New FreeBSD blog post:
I hooked up a desk phone to my FreeBSD server, so I actually can call it.
https://interfacecraft.online/blog/2026/desktop-phone-connected-to-freebsd-server/
"We're told that if we don't say yes, we're driving away the future. But that's a false choice. A big employer who uses the water of 50,000 people…[but] only hires about 10 people is not an employer. They are an extraction. We are being asked to fund a 21st century luxury with a 19th century resource heist."
- Will Hollingsworth speaking about proposed datacenter in Ravenna Ohio, April, 10, 2026
#Meta Is Warned That #FacialRecognition Glasses Will Arm Sexual Predators
https://www.wired.com/story/meta-ray-ban-oakley-smart-glasses-no-face-recognition-civil-society/
@cwebber ah, this is how they'll compensate for all the license fees lost due to employee layoffs? 🤦♀️ #enshittification #AI
Microsoft isn't removing Copilot from Windows 11, it's just renaming it - Neowin
https://www.neowin.net/opinions/microsoft-isnt-removing-copilot-from-windows-11-its-just-renaming-it/
Microsoft strips Copilot branding from Windows 11 apps, but AI features remain, exposing a gap between user expectations and reality.
#software #privacy #ai #offrehacked
Hi everyone, I'd planned to write some articles on my blog about my company's move to agentic AI.
It's still in progress; for now I'm mostly taking audio notes. I'm looking for my rhythm for writing short, factual articles.
I know nobody is waiting for my articles, but I wanted to let you know it's underway.
In this thread, what I've planned so far:
All agree 🫠
« LLMs aren't a new computing paradigm. They're a return to centralized computing — terminals, batch jobs, security, sandboxing, chargeback — wrapped in APIs and better fonts. A 1990's sysprog's guide to the pattern everyone else missed. »
« It's a Fucking Mainframe »
We’re all monitoring the situation to feel some agency over issues well beyond our control. But is that doing us any good?
On #TechWontSaveUs, I spoke with Amanda Mull to dig into how we consume information and what drives all that engagement.
Listen to the full episode: https://techwontsave.us/episode/323_take_a_break_from_the_feed_w_amanda_mull
#tech #iranwar #socialmedia #ai #artificialintelligence #trust #media
you're melting my cynical social media encrusted heart
EVERYONE WELCOME WILL HOLLINGSWORTH TO #MASTODON and the #FEDIVERSE!
give him a follow
who?
we were charmed by Will's presentation to his city council of #Ravenna #Ohio against an #AI #datacenter proposal there
https://mastodon.social/@benroyce/116389416128065655
Will heard our love and now Will is here
Will:
you are a fucking inspiration
you have warmed hearts and clarified minds
welcome! 🥳🏆
EDIT:
i asked for proof it's will and will posted it
Will Hollingsworth is his name
Give him a listen
Talking against a #dataCenter proposal in Ravenna, #Ohio
Speaking eloquently on how he trained the #AI that replaced him, bad #regulations, lying #techBros...
Witty, passionate, persuasive
Will is a #citizen in the finest sense of the word
"I am not a cynic when it comes to #technology. I am a believer in #community. I believe that a drop of clean #water for a Ravenna child is worth more than a billion #AI generated images"
👏 👏 👏
We’re Using So Much AI That Computing Firepower Is Running Out
AI companies are rationing offerings and products, rankling users—a warning sign for a boom that depends on rapid adoption
#news #tech #technology #AI #aislop #nvidia #power #datacenters
Makes perfect sense, and no sense at all.
Mark Zuckerberg is reportedly building an AI clone to replace him in meetings
#AI #MarkZuckerberg
https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone
it's getting cereal
#unix_surrealism #technomage #openbsd #linux #comic #ai #mastoart #fediart #foss
Mark #Zuckerberg is reportedly building an AI clone to replace him in meetings
The AI version of Zuckerberg is trained on his mannerisms, tone, and public statements, according to a report from the Financial Times.
https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone
📣 We must condemn as strongly as possible violence like this. We must also condemn all actions that promote the collapse of society by pillaging all available public and private resources to further enrichen billionaires.
https://www.thealgorithmicbridge.com/p/ai-will-be-met-with-violence-and
Happy Monday the 13th! Get a jump start on your workweek with DevOps'ish 304 — Chips Up, Code Worthless, Hobby Dead, and more https://devopsish.com/304 #DevOps #Cloud #Kubernetes #AI #Tech #News #Newsletter
Are you looking at how you can bring your Copilot and Puppet development together to build more of your infrastructure as code in an agent-assisted flow? Jason St-Cyr put together a blog tutorial showing his experience generating a Puppet module using GitHub Copilot, Visual Studio Code, and the latest Puppet MCP release!
https://www.puppet.com/blog/ai-assisted-puppet-module-development
New on the blog: Supporting AI literacies for young adults
We collaborated with the Responsible Innovation Centre for Public Media Futures, based at BBC R&D to create this report and framework.
#AI isn’t just about tools; it’s about understanding systems, values, and participation.
Our latest post explores how educators and youth workers can help young people navigate, question, and co-create with AI.
🔗 https://blog.weareopen.coop/supporting-ai-literacies-for-young-adults/
Penguin Rebellion [they/them] » 🌐
@penguinrebellion@tldr.nettime.org
New #Linux #kernel #policy: when writing code, humans can be "assisted" by "#AI", but they have to disclose it, and take full responsibility, as contributors.
While this gets celebrated as a "pragmatic stance", it simply delegates to individual contributors responsibilities that no one can reasonably take on in good conscience.
Would you be willing to guarantee, in a legally binding way, with all consequences, that your "AI" "assistant" didn't copy-paste code that's under an incompatible license? Or even proprietary, stolen code?
This is a cop out, not a responsible policy. Basically the dirty #subcontractor pattern: Everybody knows that nobody can actually guarantee what they're promising, but hey, wink wink here's their signature, they "promised" it wink wink
Yet another instance of this classic complaint : « young people don't want to work these days! »
(it seems like, in the end, AI is really just a boomer thing. Spoiler: yes)
[edit] Quick reminder that boomer is not an age range, it's a state of mind, as the second image illustrates.
RE: https://mamot.fr/@pluralistic/116395778447572916
"I'm worried about AI psychosis. More precisely, I'm worried about the psychosis that drives 'capital allocators' to spend *$1.4 trillion* on the most money-losing technology in human history, in the fanciful hope that if you teach a word-guessing program enough words, it will end up stealing all our jobs."
- Cory Doctorow, impeccable as always
I'm worried about AI psychosis. Specifically, I'm worried about the psychosis that makes "capital allocators" spend *$1.4T* on the money-losingest technology in human history, in pursuit of a bizarre fantasy that if we teach the word-guessing program enough words, it will take all the jobs.
--
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2026/04/13/always-great/#our-nhs
1/
This week's newsletter is 🔥🔥🔥
DevOps'ish 304: Chips Up, Code Worthless, Hobby Dead, and more https://devopsish.com/304/ #DevOps #Cloud #Kubernetes #AI #Tech #News #Newsletter
Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:
https://julianoliver.com/projects/science-is-poetry/
The page may grow a bit. Just wanted to get it out the door.
"I’ve thought several times that we really need some sort of cute portmanteau of 'LLM' and 'Gell-Mann Amnesia' for the way a lot of LLM-related discourse seems to be people expecting LLMs to take over every job and field except their own."
@ubernostrum, 2026
https://www.b-list.org/weblog/2026/apr/09/llms/
@pluralistic you've written about this a few times, and you're good with neologisms, what you got?
You can't even trust Microsoft Copilot to be consistently helpful with questions about Microsoft Copilot.
I asked Copilot chat why I couldn't see any Copilot features in Microsoft Teams. It went through all sorts of convoluted scenarios about M365 Tenant policy without ever bothering to mention the most obvious salient point: Copilot features in Teams, unlike Copilot features in all the other M365 apps, are only enabled when you have a paid M365 Copilot license.
#AI #Microsoft #Copilot
A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
The fake papers thanked Starfleet Academy, cited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring, and stated mid-paper that the entire thing was made up. Google's Gemini told users it was caused by blue light. Perplexity cited its prevalence at one in 90,000 people.
ChatGPT advised users whether their symptoms matched. The fake research was then cited in a peer-reviewed journal that only retracted it after Nature contacted the publisher.
#AI #AImistakes
https://www.nature.com/articles/d41586-026-01100-y
How We Broke Top AI Agent Benchmarks: And What Comes Next
https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/
#HackerNews #AI #Benchmarks #Top #Performance #Future #of #AI #Innovation #Machine #Learning #Insights
German speakers, please help me:
I'm in my German course now.
The other students and I have noticed that we're all anxious about #AI.
We were talking about AI, and then we asked: "How do you say 'AI' in German?"
I said that my German colleagues only ever say "AI". But we looked it up in the dictionary and found "künstliche Intelligenz"… I've never heard that in German!
Then we asked: is "AI" der, die, or das? Is it "die", like Intelligenz?
Uncomfortable questions…
- To what extent is #FOSS complicit in the rise of #BigTech?
- To what extent is FOSS complicit in the disruptive #AI craze we face today?
- To what extent would vibe-coding #LLMs even be possible without FOSS?
"BUT.. BUT.. The License!"
- To what extent does slapping on a license free us from responsibility, knowing that it hardly offers protection from abuse?
- To what extent did FOSS, too, just introduce the tech and damn the externalities?
- To what extent is FOSS complicit in the current state of the world?
- To what extent is it enough to consider FOSS to be "imbued with good morals and values" if we can't defend those?
| We are clear. Because our intentions are good.: | 5 |
| We are clear. We just code. Bad actors abuse it: | 7 |
| We must find better ways to protect our work.: | 40 |
| Other (please comment): | 6 |
Closed
In the #US, a bill in Illinois is backed by #OpenAI. It could exempt #AI giants from all liability if their tools cause mass deaths or an economic catastrophe.
Another reason to pick #hyperbola #BSD : #Torvalds' policy on #AI-written code for #Linux
https://github.com/torvalds/linux/blob/master/Documentation/process/coding-assistants.rst
I'll hardly miss that half-#GPLv2 released kernel (full of dirty blobs in the official release, unlike the #linux-libre one), with tools and direction managed by #IBM in order to create their weird #AIX clone.
IDK about #GNU #Hurd, but it's not ready. OFC you are free to rebase #hyperbola onto that kernel.
The US strategy is to move the planet's energy corridor to the Western Hemisphere -- away from the #MiddleEast
This is why they took #Venezuela first before moving on #Iran. The longer #Hormuz is disrupted, the better for them, in fact.
Analysts are still stuck in the 1970s and can't see what Washington are doing right in front of them.
https://m.youtube.com/watch?v=0nt1CgQsgpI&pp=0gcJCdoKAYcqIYzv
#war #geopolitics #news #China #Russia #Africa #europe #Canada #USA #LNG #oil #energy #economy #Tehran #ai #Palestine #Lebanon #gas
I'm trying to learn how to parse SANs in C thanks to the source code of curl https://github.com/curl/curl/blob/935e1f9963a12ac1a880df538b23b824d2fea7bb/lib/vtls/openssl.c#L2073
Why? I would like pgBackRest to parse SANs before CN, because CNs have been deprecated for years and are optional.
The problem is that no matter how hard I try to learn and write C, I fail.
I tried to implement Proxy Protocol for PGbouncer and PostgreSQL, failed.
I could open an issue, wait for a fix and cross my fingers or pray the gods, but I don't want to overload the project. My issue is not that important. What's important to me is the personal reward of contributing to open source. I want to learn. I want to contribute. I want to be a little part of the movement.
You should ask Claude, they say. It will be fun, they say. I'm not ready for that. I don't want to bypass everything for one of my side projects. But in the meantime, I'm frustrated at failing. This is very tempting, I must admit.
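A sketch of the precedence rule the post is after, in Python rather than C for brevity: prefer subjectAltName (SAN) entries and fall back to the deprecated Common Name only when no SAN is present. This is purely illustrative, not curl's or pgBackRest's actual code; the cert layout mimics Python's `ssl.SSLSocket.getpeercert()` shape, and `fnmatch` is a stand-in for proper RFC 6125 wildcard matching (it is looser, e.g. `*` matches dots).

```python
import fnmatch

def host_matches(cert: dict, hostname: str) -> bool:
    # DNS entries from the SAN extension, e.g. ("DNS", "*.example.org")
    sans = [v for (k, v) in cert.get("subjectAltName", ()) if k == "DNS"]
    if sans:
        # Once any SAN is present, the CN must be ignored entirely
        return any(fnmatch.fnmatch(hostname, pat) for pat in sans)
    # Legacy fallback: first commonName found in the subject
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return fnmatch.fnmatch(hostname, value)
    return False

cert = {
    "subject": ((("commonName", "old.example.org"),),),
    "subjectAltName": (("DNS", "db.example.org"), ("DNS", "*.internal")),
}
print(host_matches(cert, "db.example.org"))   # True: SAN matches
print(host_matches(cert, "old.example.org"))  # False: CN ignored when SANs exist
```

The key behaviour is the early `return` inside the `if sans:` branch: a certificate with SANs never falls through to the CN, which is exactly the SAN-before-CN ordering wanted above.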
🚀 We just sent a team of humans to the moon and safely home.
Not one bit of generative AI was used or needed.
You don't need it in your office or organization, either.
We can do great things without pillaging public resources and funds for a scam.
#AI #ArtificialIntelligence #NASA #Artemis #moon #genAI #generativeAI #slop
#AI And #Cybersecurity: A Glass Half-Empty/Half-Full Proposition, Where The Glass Is Holding Nitroglycerin - https://www.techdirt.com/2026/04/10/ai-and-cybersecurity-a-glass-half-empty-half-full-proposition-where-the-glass-is-holding-nitroglycerin/ great headline
Six #ASAPbio fellows asked four #LLMs to describe the strengths and weaknesses of #preprints. Here are the results.
https://asapbio.org/interim-findings-from-an-investigation-into-llm-responses-about-preprints-a-2025-asapbio-fellows-project/
The same fellows asked the same LLMs to ingest six preprints and their #PeerReviewed counterparts, and compare them for quality and rigor. Good question. But they've not yet analyzed the data and will presumably report soon.
PS: I'm interested in a related question. When LLMs answer research questions, do they treat on-topic preprints and on-topic postprints (peer-reviewed articles) as equivalent in weight or credibility? If not, how exactly do they take any differences into account?
Style suggestion: if you want to critique AI and its impact on the planet, maybe don't use AI to generate images.
Maybe just go outside and take a picture of something you think is great about the world.
Bring your outside world to the inner world of others.
Just a thought. 🙃
RE: https://federate.social/@jik/116375328444003590
The wisdom of the crowds is correct once again. In the three weeks since #SalemStateUniversity rolled out their "AI Assistant", they have sent me six emails encouraging me to use it. And somehow I suspect those won't be the last.
When I have questions about my child's university, I do not want to talk to an AI assistant, I want to talk to a person.
#AI
How many emails has #SalemStateUniversity sent me in the past three weeks begging me to use their new "AI Assistant" that someone convinced them to pay a ton of money for that no students or parents asked for or want to interact with?
#AI
| 0–1: | 0 |
| 2–3: | 0 |
| 4–5: | 5 |
| 6+: | 17 |
Closed
Microsoft denies Copilot is only for entertainment purposes, after its own documentation says not to trust AI
@jwildeboer wasn't it only like 6 months ago that #OpenAI pulled a similar stunt, with one of its project leaders "quitting in protest" in the name of Pandora because he was so terrified their technology was on the verge of achieving AGI sentience?
These companies are trying so consistently to normalize #AI that now they even present lookups in a DB as AI.
Also, shame on the user who used Google and not a real whois database.
This same discussion is raging across the entire planet. Yet the 'yes I like it / no I hate it' back-and-forth isn't very interesting or fruitful, turning thoughtful debate into heated shouting matches.
Instead, ponder the technology as-is. Adopt a more strategic, but also philosophical and psychological viewpoint; shift perspective. We need calm environments to analyse what all this means for our #future. Deal with utterly disruptive technology that is *already* dumped right in the midst of us. Much more to come.
Solution orientation is needed to tackle this huge #challenge. I consider LLMs inhumane tech, immoral and unethically introduced. Corporate capture of all human knowledge for pure commercial gain. Greed, vanity, power of the #EpsteinClass. Enormous resource use. Looming AI #dystopia. Hallmarks of #hypercapitalism.
What risks does #humanity face? How can we protect our #freedom? Can we tackle wicked problems?
#fediverse is at an inflection point.
Either revival and course correction to the original #ActivityPub protocol power and promise. With the potential to #ReimagineSocial.
Or keep current track with fedi-we-have. Be content with a few great and reasonably popular app platforms. Surely some more to come. But with a messy wire protocol that stifles #innovation and isn't future-proof.
#AskFedi do you dare to dream?
This special thought provoker is based on personal reflection and 8 years of #commoning. Deliberately exposed to the inherent unsustainability of the #FOSS movement. Burning privilege by spending my savings.
Goal: 1st-hand experience to learn the #social dynamics that make a #commons tick.
I invite you to a #brainstorm & #ideation ride. To ponder how #fedi can organically evolve. Become unbeatable by #hypercapitalism.
https://coding.social/blog/grassroots-evolution
But in an age of #AI who still reads long handcrafted #blogs? Fill in the #poll.
| In the end I more or less read the whole article: | 36 |
| I read the article summary, skimmed for highlights: | 10 |
| I passed the problem section, read the tech ideas: | 2 |
| Meh, skip. Too technical. Too social fluffy. Other: | 9 |
Closed
#OpenAI Backs Bill That Would Limit Liability for #AI-Enabled Mass Deaths or Financial Disasters - https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/ they wouldn't do this if they weren't worried it will happen...
"The percentage of respondents ages 14 to 29 who said they felt hopeful about A.I. declined sharply since last year, down to 18 percent from 27.
Young adults’ excitement about artificial intelligence dropped, too, and nearly a third of respondents indicated that the technology made them feel angry."
That's rough.
AI Is Coming for Car Salesmen and Let’s Be Real, It Makes Perfect Sense
https://www.thedrive.com/news/ai-is-coming-for-car-salesmen-and-lets-be-real-it-makes-perfect-sense
Every week, 5,100+ engineers, architects, and tech leaders read DevOps'ish to stay sharp on Cloud Native, DevOps, Kubernetes, Open Source, and AI.
Not just news. Signal. Opinions. The stuff that actually matters.
Free to subscribe. Hard to unsubscribe from.
👉 https://devopsish.com/subscribe
#DevOps #Kubernetes #CloudNative #Tech #AI
How many emails has #SalemStateUniversity sent me in the past three weeks begging me to use their new "AI Assistant" that someone convinced them to pay a ton of money for that no students or parents asked for or want to interact with?
#AI
| 0–1: | 0 |
| 2–3: | 0 |
| 4–5: | 5 |
| 6+: | 17 |
Closed
Does this mean that you shall also stop using curl?
AFAIK Daniel doesn't care what is used to find bugs
https://mastodon.social/@bagder/116373716541500315
#curl #LLM #hallucinated #slop #AI #InfoSec #programming #technology
RE: https://mastodon.bsd.cafe/@grahamperrin/116374810286827022
Claude Mythos Preview "fully autonomously" finds and exploits new FreeBSD vulnerabilities
#FreeBSD #Linux #OpenBSD #security #vulnerability #AI #Anthropic #Claude
Claude Mythos Preview "fully autonomously" finds and exploits new FreeBSD vulnerabilities
<https://www.reddit.com/r/freebsd/comments/1sgmi14/claude_mythos_preview_fully_autonomously_finds/>
"(plus Linux, OpenBSD, and others) – more concerning than calif.io story with known CVE and human prompting? …"
– BigSneakyDuck
#FreeBSD #Linux #OpenBSD #security #vulnerability #AI #Anthropic #Claude
Do you hate #broligarchs?
#Billionaires? #AiSlop but still think there is merit in #AI?
Here is my proposal for a stand alone.
OFFGRID COMMUNITY AI SYSTEM.
That's right. Your very own co-op AI.
The calculations are very much back of the envelope, first cut, but quite feasible.
A 32-billion-parameter open-source #llm model with frontier-level-comparable performance. The power requirement is that of 3 AC units, including cooling. Serves 15-20 concurrent users: 40 households of 4 people each (taking into account actual distributed AI-model use metrics and contention ratios).
40 households, subscribing at $30/month over 2 years + power (solar). Train with your own datasets.
Entire set up takes half a rack.
LETS GO!!!
#OpenSource #FOSS #CommunityTech #OpenHardware #EthicalAI #ResponsibleAI #AIForGood #TechForGood #Solarpunk #RegenerativeCulture #Degrowth #AppropriateTechnology #OffGrid #SelfSufficient #Homesteading #Permaculture #RightToRepair #MakerSpace #DIYTech #decentralizedtech
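The back-of-the-envelope figures in the post above can be checked with a quick script. Everything here is taken from the post itself (40 households of 4 people, $30/month for 2 years, 15-20 concurrent users); the contention ratio is just the derived potential-users-per-concurrent-slot figure, not a quoted number.

```python
households = 40
people_per_household = 4
monthly_fee = 30            # USD per household, from the post
months = 24                 # 2-year subscription horizon
concurrent_users = (15, 20) # claimed serving capacity

total_users = households * people_per_household
budget = households * monthly_fee * months
# Contention ratio: how many potential users share one concurrent slot
contention = [total_users / c for c in concurrent_users]

print(f"Potential users: {total_users}")             # 160
print(f"Subscription pool over 2 years: ${budget:,}")  # $28,800
print(f"Contention ratio: {contention[1]:.1f}:1 to {contention[0]:.1f}:1")
```

So the hardware and power budget implied by the pitch is roughly $28,800 (before the solar install), with each concurrent inference slot shared by 8 to ~11 potential users.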
boosted
The AI Great Leap Forward
https://leehanchung.github.io/blogs/2026/04/05/the-ai-great-leap-forward/
Google’s AI Overviews are providing “tens of millions of wrong answers … every hour — and hundreds of thousands every minute.”
wow, i love the AI future!
https://futurism.com/artificial-intelligence/google-ai-overviews-misinformation
These pins are good conversation starters for both work and non-work settings. Inspired by "Ghost In The Machine".
boosted
I would suggest that folks who think using AI is great for mathematicians should think again. It seems as little as 10 minutes of use can be problematic. What else do we know that provides short-term gains at the expense of long-term loss?
"Here, through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence for two key consequences of AI assistance: reduced persistence and impairment of unassisted performance. Across a variety of tasks, including mathematical reasoning and reading comprehension, we find that although AI assistance improves performance in the short-term, people perform significantly worse without AI and are more likely to give up. Notably, these effects emerge after only brief interactions with AI (approximately 10 minutes). These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning."
— From AI Assistance Reduces Persistence and Hurts Independent Performance, on arXiv https://arxiv.org/abs/2604.04721
#AI #GenAI #GenerativeAI #AgenticAI #AIAssistants #CognitiveImpairment #math #MathematicalReasoning #ReadingComprehension
If a human right is in the way of your "innovative" technology, the expected solution should be to modify your technology to respect this human right, not to reduce the protections to this human right.
Technology and innovation must be in service of humanity, not the other way around.
He takes a deep dive into the new #VictoriaMetrics MCP Server, not just talking about #AI but building with it.
From a builder's perspective, he walks through how we integrate AI with time-series data to tackle real #monitoring challenges. Expect a demo grounded in reality, showcasing what's possible today and what still needs work.
Save the date today! 👇
https://osoday.com/
Here, this Ars Technica writer is uncomfortable with the fact that vibe code is mocked and I can’t roll my eyes hard enough at the way this was written. https://archive.is/wh4gv #AI #LLM
We already knew it: LLMs are NOT what the capitalists want to "sell" in their search for a miracle solution to save their system running on fumes.
LLMs are NOT intelligent.
They do not "think".
They can't solve a primary-school math problem if you add a single out-of-context sentence (see image alt)
⤵️
https://arxiv.org/abs/2410.05229
And they NEVER will.
It's garbage. Stop pushing this tech everywhere.
“Teachers who use AI ‘will replace those who don’t’, the chair of the Oireachtas committee on artificial intelligence has warned.
Fianna Fáil TD Malcolm Byrne said he was worried Ireland was at risk of falling behind in discussions around how AI can be ‘responsibly integrated into our formal education system.’”
Fianna Fáil TD Malcolm Byrne is a fool who doesn't have the first clue what he's talking about.
#Japan relaxes #privacy laws to make itself the ‘easiest country to develop #AI’ - https://www.theregister.com/2026/04/08/japan_privacy_law_changes_ai/ "Opting out of personal data use won't be an option because Minister says that's a 'very big obstacle' to AI adoption" race to the bottom continues
Tech giants launch #AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities - https://cyberscoop.com/project-glasswing-anthropic-ai-open-source-software-vulnerabilities/ " The program comes as the tech industry races to secure software before similar AI-powered offensive capabilities become too much for defenders to handle. "
We knew, but the proof is nice.
"Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves"
The guess-the-next-words machines don’t actually understand anything.
https://nitter.poast.org/heynavtoor/status/2041243558833987600#m
*Scientific papers* 1/2
Interesting study by @heigit on #AI-assisted integration of road #data into #OpenStreetMap, which shows the more corrections made using #HumanInLoop, the less #transparency and #traceability regarding #DataQuality
Personally, I've always refused to switch from being a skilled craftsman to a controller of standardized products, but here's another argument against introducing #AI #geodata into #OSM. Solution: #mashup #OSM + AI data to fill in the gaps.
https://www.tandfonline.com/doi/full/10.1080/24694452.2025.2589286
You get a chance to push a button which destroys all the gen"AI" in the world. What do you do?
#AI #noAI #LLM #LLMs #vibeCoding
| I destroy the button, I need my coding agents!: | 0 |
| Of course I push the button! Good riddance!: | 37 |
| Noooo, genAI is going to evolve into AGI in 6mo: | 0 |
| I do not want to break hearts of my friends: | 0 |
| Just a couple more billions, bro: | 3 |
| Will it destroy data centres or do we scavenge?: | 21 |
| Will it also end capitalism?: | 24 |
| Will it also revert to the pre-slop saved state?: | 25 |
| Nooo, my pacemaker is vibe coded!: | 2 |
| What the hell?!: | 2 |
The latest in Cloud Native, DevOps, Open Source, AI, tech industry news, culture, and the 'ish between. DevOps'ish 303: Claude Code's Source, Iran's Tech Hit List, Microsoft's rough times, and More https://devopsish.com/303 #DevOps #Cloud #Kubernetes #AI #Tech #News #Newsletter
Personally, I haven't looked for a use case (I did try to get it to write a PSSI one desperate evening... it wasn't conclusive)
I don't know how anyone can trust the headlong rush of these AI companies.
"""
OpenAI’s leadership reportedly disagrees about when to raise money and how to spend it
[…]
Altman has “excluded [Friar] from some conversations related to the company’s financial plans.
"""
AI, 10 years from now. Cartoon published today in Belgian newspaper De Morgen: https://www.demorgen.be/puzzels-cartoons/tjeerd-royaards~b6a46595/
How it started / how it's going.
Remember when OpenAI said they couldn't move forward without plundering copyrighted works?
Well, an AI company cloned a singer's voice without her consent.
Now she's getting copyright strikes on YouTube, claimed by the very company that cloned her voice. And as a bonus, her own videos have been demonetized.
As Thailand joins the global AI race, a #Mongabay investigation reveals roughly 20 new data centers under construction.
Local communities warn they are being kept in the dark about threats to water, land, and livelihoods.
A report by Gerry Flynn and Andy Ball.
👉️ https://mongabay.cc/4IKbuH
Just so I understand this correctly...
We don't want machine-generated vulnerability reports...
...so we can leave our #foss projects vulnerable to hackers who are not constrained by ideology in their sploits using #Ai ?
Yeah, that tracks with the current majority of #infosec "professionals" letting Rome burn while they roast marshmallows, feeling super pure and superior.
The perpetual non-sense 🤖💩
« Folk musician Murphy Campbell found herself at the center of a major ordeal when an entity called Timeless Sounds IR uploaded #AI-generated imitations of her music to every major music platform, then used her recordings to strip her of her own income. »
› https://rudevulture.com/ai-company-clones-musicians-voice-then-copyright-strikes-her-own-songs/
RE: https://mastodon.social/@gusseting/116360107497873443
"If Greenpeace is using AI, it can't be that bad for the environment, right?"
That's what lots of well meaning but naive people are going to think. I don't care how "ethical" Greenpeace's AI Strategy is, it'll work as greenwashing.
#climateCrisis #climateChange #AI #greenpeace #techbros #tech #greenwashing #capitalism
#Hiring for a #journalism assistant and got way too many applications. Too many cover letters have the exact same format, with bullet-point summaries in the middle.
Is this how #AI writes cover letters?
It has been a busy winter so far for me, which is why I haven't been posting a lot here. But today I'm proud to share with you the fruits of some of that labor: The Colorado Democratic Party's platform for 2026. For those unfamiliar, a platform (in the US) is a statement of values that a political party stands for, generally agreed upon by people who stand for election as representatives of the party.
I was elected during last year's party re-org to the Platform Committee. The chair of the committee asked if I would run the subcommittees for two of the "planks" (sections) of the platform: the Democracy section, and the New Tech & AI section. It was an honor to work on both.
I'm going to share screenshots from the New Tech & AI plank because it's relevant to the work I do here, and I think a lot of people might be interested to see this statement of values. This plank is brand new, never before covered in prior Platform documents.
I'm also pleased to report that the whole of the Platform Committee and the roughly 1500 delegates to last weekend's statewide party Assembly voted to approve this as-is, with no additional changes, on a vote of 98.9% in favor.
There's a lot to like, but my favorite aspect of this is that I managed to get widespread approval for use of the term #enshittification in the official platform, both from the Platform committee and the larger party leadership. Thanks @pluralistic for the inspiration. (I believe this is the first time the term has been used in any official political party platform ever.)
The full platform is readable at https://www.coloradodems.org/platform
#AI #datasovereignty #privacy #infosec #techequity #R2R #RightToRepair #politics #COpolitics #Boulder #Colorado #Democracy #democrats
Hell…
“Clicking through the links revealed that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.
(…)
Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.”
PyCon US has a brand new AI track this year and it's packed.
Agents. EdgeAI. Voice AI. Explainability. Hardware. Performance.
Friday May 15, Long Beach.
Full breakdown of every session + tutorials worth your time
https://pycon.blogspot.com/2026/04/python-and-future-of-ai-agents.html
REGISTER https://us.pycon.org/2026/attend/information/
#PyConUS2026 #Python #AI
Etiquette tip:
THIS IS RUDE:
“I asked [LLM] your question and here’s its answer: [link]”
Not only did you mindlessly feed my question to the slop-making machine, you’re not even trying to parse its sloppy output, or fact-check it, or see if it’s any good; you’re literally serving the entire plate of slop to me so I have to deal with it.
No. Fuck you. You did worse than nothing: you made the world a shittier place.
Quick and easy #AI #Ethics #LitmusTest:
Replace every occurrence of "AI" in the speaker's sentence with "slavery," and see how it feels.
Examples:
"I'm really excited about what slavery can do for my productivity."
"I'm not crazy about slavery, but my job is pushing it, so I guess I gotta use it."
"If I don't develop slavery, someone else will. At least I know I'll do it right."
"Slavery is happening, like it or lump it. Learn to use it, or get left behind."
"I really hate slavery. I hate how dehumanizing it is, and how it is empowering horrible people. We have to find some way to stop it."
"Slavery is the future, you're stuck in the past. Don't stand in the way of progress."
"I'm tired of arguing about slavery. Use it or not, whatever. Just do whatever works for you."
Does that clear it up for you?
AI apologists be like:
> I feel unwelcome on the fediverse. Persecuted even. No one wants to hear me boast about AI while I actively contribute to the destruction of knowledge, democracy, the web and the fucking planet.
Get a jump start on your workweek with DevOps'ish 303 — Claude Code's Source, Iran's Tech Hit List, Microsoft's rough times, and More https://devopsish.com/303 #DevOps #Cloud #Kubernetes #AI #Tech #News #Newsletter
@Gargron @scottjenson Hot take : "gen A.I. is a fascist tool" /me 2026 #fascism #antifa #TaxTheRich #SaveThePlanet #noai #AI
Didn't another company say its "product" was "for entertainment purposes only"?
Microsoft says Copilot is for entertainment purposes only, not serious use — firm pushing AI hard to consumers and businesses tells users not to rely on it for important advice