Finally, after an epic battle against Gitea CI:
The definitive answer to the question, "Is AI intelligent yet?"
https://github.com/searxng/searxng/issues/2163
https://github.com/searxng/searxng/issues/2008
https://github.com/searxng/searxng/issues/2273
Folks, I need to talk about this Grok/X/ #AI thing. I will not give details. But the vile aspects of this news, added to the downfall of mainstream journalism quoting Grok as if it were a real entity... I think all of this should give people pause. Like, where are we fucking going? It's been several years now and we have not seen any glimpse of a bright future with this #GenAI shitshow. Consider this an exercise in screaming into the void 🧵
I wanted to share my opinions on #AI, specifically #genAI. Thanks for reading!
https://medium.com/@santiagorhenals/my-opinions-on-ai-us-00d3b4168e87
"There have been several incidents where interaction with a chatbot has been cited as a direct or contributing factor in a person's suicide or other fatal outcome. In some cases, legal action was taken against the companies that developed the AI involved."
CW: Death, of course.
"Turning to generative AI for your filmmaking, even to make a point about its own uselessness, is to accept defeat."
https://www.avclub.com/deepfaking-sam-altman-lets-ai-direct-the-movie-which-is-accepting-defeat
Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.
I wish we had more people writing more sophisticated concerns about the harms of AI.
"Slop" criticism is important because I think many of us feel we are being gaslit into believing that generative AI is currently creating quality creative output, while Al (henceforth Alfred) is overwhelmingly creating mediocre-quality creative work.
Alfred has survived every past AI winter in a more focused and refined form. Eliza was a toy chatbot of the 1960s, but what emerged from it was expanded investment in things like Natural Language Processing and Markov chains.
Markov chains have always been powerful, but through computing applications, Markov chains led to advancing capabilities in weather prediction, financial modeling, and eventually Google PageRank and bioinformatics/biostatistics (BLOSUM/BLAST for analyzing/predicting/correlating amino acid chain similarities).
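The post above names PageRank as the best-known computing application of Markov chains. As a minimal sketch of that connection (the 3-page link graph and the 0.85 damping factor below are illustrative, not from the post): PageRank treats the web as a Markov chain over pages and takes each page's rank to be its probability in the chain's stationary distribution, found here by power iteration.

```python
import numpy as np

# Toy link graph as a column-stochastic transition matrix for 3 pages.
# Entry M[i, j] is the probability a random surfer on page j follows
# a link to page i. (Made-up data, purely for illustration.)
M = np.array([
    [0.0, 0.5, 1.0],
    [0.5, 0.0, 0.0],
    [0.5, 0.5, 0.0],
])

d = 0.85                     # damping factor (random-jump probability 1 - d)
n = M.shape[0]
# "Google matrix": mix link-following with uniform random jumps,
# which keeps the chain irreducible so a unique stationary vector exists.
G = d * M + (1 - d) / n * np.ones((n, n))

# Power iteration: repeatedly apply the chain until the rank vector
# converges to the stationary distribution.
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = G @ r

print(r)  # ranks form a probability distribution (they sum to 1)
```

Here page 0, which receives links from both other pages, ends up with the highest rank, which is exactly the "important pages are linked to by important pages" intuition.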
The next boom and winter led Alfred to popularize classic machine learning into practical applications in consumer products: recommendation engines and clustering algorithms that formed the core of products from the Netflix prize to Spotify, Pandora, Amazon, and eventually virtually every consumer retailer has multiple machine learning products supporting everything from suggested products to search results to their own logistics and sales and financial modeling.
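The recommendation engines mentioned above are often built from something as plain as item-item cosine similarity; the following is a hedged sketch of classic item-based collaborative filtering in the style the Netflix Prize popularized (the ratings matrix is entirely made up for illustration, not any product's real data or algorithm).

```python
import numpy as np

# Toy user-item ratings (rows: users, cols: items); zeros mean "unrated".
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
    [0.0, 1.0, 4.0, 5.0],
])

# Item-item cosine similarity: compare items by the angle between
# their rating columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

# Predict user 0's score for item 2 as a similarity-weighted average
# of the ratings that user has already given.
user, item = 0, 2
rated = R[user] > 0
pred = sim[item, rated] @ R[user, rated] / sim[item, rated].sum()
print(round(pred, 2))
```

The prediction comes out low because item 2 is most similar to item 3, which this user rated poorly; that is the whole mechanism behind "people who liked X also liked Y".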
Learn from the past. Prepare for the future. Al will learn how to spell strawberry, write basic documents and code more effectively, and make fewer basic mistakes. Think beyond that. What are the emerging harms that come AFTER all of that?
#AI #GenAI #GenerativeAI #generative_ai_risks #generative_ai_concerns
In which I build Unicode character tables and fail to serialize large automata, “Losing 1½ Million Lines of Go”:
https://www.tbray.org/ongoing/When/202x/2026/01/14/Unicode-Properties
(And in which I find myself sliding into the #GenAI-is-ok-for-coding camp.)
A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, workouts, and even personal insurance decisions.
It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates that the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace advanced that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's "The Immortal Science of ML: Machine Learning & the Theory-Free Ideal", or Byung-Chul Han's "Psychopolitics" (which argues, among other things, that this is a nihilistic way of thinking).
"Sources tell Windows Central that internal teams are also beginning to push back against excessive integration, and the company may therefore reconsider its stance.
At the very least, Microsoft may dial back Copilot features or remove the chatbot’s branding from apps like Notepad and Paint to make the experience feel more conventional, the report says."
https://www.pcmag.com/news/microsoft-reportedly-plans-to-dial-back-copilot-across-windows-11-apps
In the year of the city 2274, the remnants of human civilization live in a sealed city beneath a cluster of geodesic domes, a utopia run by computer. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Carrousel", a public ritual that destroys their bodies, under the pretense they would be "Renewed" or reborn.
In the year of the city 2274, the colony of human beings on Mars lives in a sealed city beneath a cluster of geodesic domes, a utopia run by generative AI. The citizens live a hedonistic lifestyle, but when they turn 30 they must enter the "Cloud", a public ritual that destroys their bodies, under the pretense that their consciousness will be uploaded to a computer and live forever.
We present the first representative international data on firm-level AI use. We survey almost 6000 CFOs, CEOs and executives from stratified firm samples across the US, UK, Germany and Australia. We find four key facts. First, around 70% of firms actively use AI, particularly younger, more productive firms. Second, while over two thirds of top executives regularly use AI, their average use is only 1.5 hours a week, with one quarter reporting no AI use. Third, firms report little impact of AI over the last 3 years, with over 80% of firms reporting no impact on either employment or productivity. Fourth, firms predict sizable impacts over the next 3 years, forecasting AI will boost productivity by 1.4%, increase output by 0.8% and cut employment by 0.7%. We also survey individual employees who predict a 0.5% increase in employment in the next 3 years as a result of AI. This contrast implies a sizable gap in expectations, with senior executives predicting reductions in employment from AI and employees predicting net job creation.