<p>“One notable MIT study found that 95 percent of companies that integrated AI saw zero meaningful growth in revenue. For coding tasks, one of AI’s most widely hyped applications, another study showed that programmers who used AI coding tools actually became slower at their jobs.”</p><p>“A lot of generative AI stuff isn’t really working,” Gownder told The Register. “And I’m not just talking about your consumer experience, which has its own gaps, but the MIT study that suggested that 95 percent of all generative AI projects are not yielding a tangible [profit and loss] benefit. So no actual [return on investment].”</p><p><a href="https://futurism.com/artificial-intelligence/ai-failing-boost-productivity" rel="nofollow" class="ellipsis" title="futurism.com/artificial-intelligence/ai-failing-boost-productivity"><span class="invisible">https://</span><span class="ellipsis">futurism.com/artificial-intell</span><span class="invisible">igence/ai-failing-boost-productivity</span></a></p><p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/tech/" rel="tag">#Tech</a> <a href="/tags/genai/" rel="tag">#GenAI</a></p>
<p>Learning a new AI-related term: workslop.</p><p>"The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver."</p><p><a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity" rel="nofollow" class="ellipsis" title="hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity"><span class="invisible">https://</span><span class="ellipsis">hbr.org/2025/09/ai-generated-w</span><span class="invisible">orkslop-is-destroying-productivity</span></a></p><p><a href="/tags/workslop/" rel="tag">#workslop</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/slop/" rel="tag">#slop</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/technology/" rel="tag">#technology</a> <a href="/tags/news/" rel="tag">#news</a> <a href="/tags/technews/" rel="tag">#TechNews</a> <a href="/tags/study/" rel="tag">#study</a></p>
<p>Finally, after an epic battle against Gitea CI:</p><p><a href="https://areweaiyet.taffer.ca" rel="nofollow"><span class="invisible">https://</span>areweaiyet.taffer.ca</a></p><p>The definitive answer to the question, "Is AI intelligent yet?"</p><p><a href="/tags/ai/" rel="tag">#ai</a> <a href="/tags/genai/" rel="tag">#genai</a> <a href="/tags/agi/" rel="tag">#agi</a></p>
If current-generation LLM-based chatbots can drive people to commit crimes or even take their own lives, what do you suppose Neuralink would do to people?<br><br>The first thing I said to the person who suggested to me that human brains might be directly hooked to a computer, whenever that was, was "everyone will go insane immediately". I still believe that, but now we're beginning to see evidence.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/neuralink/" rel="tag">#Neuralink</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/hci/" rel="tag">#HCI</a> <a href="/tags/brainimplants/" rel="tag">#BrainImplants</a><br>
Am I to understand from this that SearXNG is in the process of becoming AI poisoned?<br><br><p><a href="https://github.com/searxng/searxng/issues/2163" rel="nofollow" class="ellipsis" title="github.com/searxng/searxng/issues/2163"><span class="invisible">https://</span><span class="ellipsis">github.com/searxng/searxng/iss</span><span class="invisible">ues/2163</span></a><br><a href="https://github.com/searxng/searxng/issues/2008" rel="nofollow" class="ellipsis" title="github.com/searxng/searxng/issues/2008"><span class="invisible">https://</span><span class="ellipsis">github.com/searxng/searxng/iss</span><span class="invisible">ues/2008</span></a><br><a href="https://github.com/searxng/searxng/issues/2273" rel="nofollow" class="ellipsis" title="github.com/searxng/searxng/issues/2273"><span class="invisible">https://</span><span class="ellipsis">github.com/searxng/searxng/iss</span><span class="invisible">ues/2273</span></a></p>The last issue hasn't been active since 2023, but the first has seen recent activity and the middle one was active as recently as last summer.<br><br><a href="/tags/searx/" rel="tag">#SearX</a> <a href="/tags/searxng/" rel="tag">#SearXNG</a> <a href="/tags/searchengines/" rel="tag">#SearchEngines</a> <a href="/tags/alternatesearchengines/" rel="tag">#AlternateSearchEngines</a> <a href="/tags/metasearchengines/" rel="tag">#MetaSearchEngines</a> <a href="/tags/web/" rel="tag">#web</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/foss/" rel="tag">#FOSS</a> <a href="/tags/opensource/" rel="tag">#OpenSource</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/aipoisoning/" rel="tag">#AIPoisoning</a> <a href="/tags/aislop/" rel="tag">#AISlop</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/chatgpt/" rel="tag">#ChatGPT</a> <a href="/tags/claude/" rel="tag">#Claude</a> <a 
href="/tags/perplexity/" rel="tag">#Perplexity</a><br>
I guess we shouldn't be surprised, but no way:<br><br>AAAI Launches AI-Powered Peer Review Assessment System<br><br><a href="https://aaai.org/aaai-launches-ai-powered-peer-review-assessment-system/" rel="nofollow" class="ellipsis" title="aaai.org/aaai-launches-ai-powered-peer-review-assessment-system/"><span class="invisible">https://</span><span class="ellipsis">aaai.org/aaai-launches-ai-powe</span><span class="invisible">red-peer-review-assessment-system/</span></a><br><br>No.<br><br>Speaking as someone who has co-organized an AAAI symposium and has, among other things, done a bunch of editorial work.<br><br><a href="/tags/noai/" rel="tag">#NoAI</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llms/" rel="tag">#LLMs</a> <a href="/tags/aioutofscience/" rel="tag">#aIOutOfScience</a> <a href="/tags/science/" rel="tag">#science</a> <a href="/tags/computerscience/" rel="tag">#ComputerScience</a> <a href="/tags/peerreview/" rel="tag">#PeerReview</a><br>
<p>Folks, I need to talk about this Grok/X/ <a href="/tags/ai/" rel="tag">#AI</a> thing. I will not give details. But the vile aspects of this news, added to the downfall of mainstream journalism quoting Grok as if it were a real entity... I think all of this should give people pause. Like, where are we fucking going. It's been several years now and we have not seen any glimpse of a bright future with this <a href="/tags/genai/" rel="tag">#GenAI</a> shitshow. Consider this an exercise of screaming into the void 🧵</p>
<p>I wanted to share my opinions on <a href="/tags/ai/" rel="tag">#AI</a>, specifically <a href="/tags/genai/" rel="tag">#genAI</a>. Thanks for reading!<br><a href="https://medium.com/@santiagorhenals/my-opinions-on-ai-us-00d3b4168e87" rel="nofollow" class="ellipsis" title="medium.com/@santiagorhenals/my-opinions-on-ai-us-00d3b4168e87"><span class="invisible">https://</span><span class="ellipsis">medium.com/@santiagorhenals/my</span><span class="invisible">-opinions-on-ai-us-00d3b4168e87</span></a></p>
What they don't tell you, what you have to figure out for yourself, is that the "things" in the slogan "move fast and break things" is us. It's people.<br><br><a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/siliconvalley/" rel="tag">#SiliconValley</a> <a href="/tags/socialmedia/" rel="tag">#SocialMedia</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/etc/" rel="tag">#etc</a><br>
Would the perception that an LLM chatbot is speaking to you be dissolved if they were deterministic instead of stochastic?<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/chatgpt/" rel="tag">#ChatGPT</a> <a href="/tags/claude/" rel="tag">#Claude</a><br>
I was just watching a YouTube video with I presume auto-generated captions, and the speaker said "the world doesn't trust the US" but the caption read "the world doesn't trust the AI".<br><br>Make of it what you will.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/uspol/" rel="tag">#USPol</a> <a href="/tags/usai/" rel="tag">#USAI</a><br>
<p>"There have been several incidents where interaction with a chatbot has been cited as a direct or contributing factor in a person's suicide or other fatal outcome. In some cases, legal action was taken against the companies that developed the AI involved."</p><p>CW: Death, of course.</p><p><a href="https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots" rel="nofollow" class="ellipsis" title="en.wikipedia.org/wiki/Deaths_linked_to_chatbots"><span class="invisible">https://</span><span class="ellipsis">en.wikipedia.org/wiki/Deaths_l</span><span class="invisible">inked_to_chatbots</span></a></p><p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/death/" rel="tag">#death</a></p>
<p>"Turning to generative AI for your filmmaking, even to make a point about its own uselessness, is to accept defeat."</p><p><a href="https://www.avclub.com/deepfaking-sam-altman-lets-ai-direct-the-movie-which-is-accepting-defeat" rel="nofollow" class="ellipsis" title="www.avclub.com/deepfaking-sam-altman-lets-ai-direct-the-movie-which-is-accepting-defeat"><span class="invisible">https://</span><span class="ellipsis">www.avclub.com/deepfaking-sam-</span><span class="invisible">altman-lets-ai-direct-the-movie-which-is-accepting-defeat</span></a></p><p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/creativity/" rel="tag">#creativity</a></p>
This article in The Register about "Poison Fountain" looks to be crithype, and the Poison Fountain project looks to be misdirection, scam, art project, or some other thing, but almost surely not a serious data poisoning proposal.<br><br>AI industry insiders launch site to poison the data that feeds them: <a href="https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/" rel="nofollow" class="ellipsis" title="www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/"><span class="invisible">https://</span><span class="ellipsis">www.theregister.com/2026/01/11</span><span class="invisible">/industry_insiders_seek_to_poison/</span></a><br><br><a href="https://rnsaffn.com/poison3/" rel="nofollow">Poison Fountain</a> starts with "We agree with Geoffrey Hinton: machine intelligence is a threat to the human species". This is a tarball of wrong. (1)<br><br>The rest of the website is absurd, and the "Poison Fountain Usage" list doesn't make any sense. There are far more efficient and safer ways to poison data that don't require you to proxy content for an unknown third party. Some of these are implemented in software, as opposed to a &lt;ul&gt; in HTML. That bullet list reads like an amateur riffing on what they read about AI web scrapers, not like industry insiders with detailed information about how training works.<br><br>Recommend viewing the top level <a href="https://rnsaffn.com" rel="nofollow"><span class="invisible">https://</span>rnsaffn.com</a> , which I suspect The Register may not have done.<br><br>The Register:<br><p>Our source said that the goal of the project is to make people aware of AI's Achilles' Heel – the ease with which models can be poisoned – and to encourage people to construct information weapons of their own.<br></p>Data poisoning is not easy, Anthropic's "article" notwithstanding. Why would we trust Anthropic to publicly reveal ways to subvert their technology anyway?<br><br>None of this passes a smell test. 
Crithype (and poor fact checking, it seems) from The Register it is.<br><br><br>(1) Hinton stands to gain professionally and financially from people believing this. Hinton personally bears a large amount of responsibility for setting off this so-called species level danger. Hinton, like all of us, cannot possibly know whether "machine intelligence" is even possible, let alone dangerous to people; that's a fanciful notion that serves the agendas of the wealthy and powerful quite well. In other words, crithype. Etc.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/anthropic/" rel="tag">#Anthropic</a> <a href="/tags/poisonfountain/" rel="tag">#PoisonFountain</a> <a href="/tags/uncriticalreporting/" rel="tag">#UncriticalReporting</a> <a href="/tags/crithype/" rel="tag">#crithype</a> <a href="/tags/theregister/" rel="tag">#TheRegister</a><br>
<p>I wish we had more people writing more sophisticated concerns about the harms of AI.</p><p>"Slop" criticism is important because I think many of us feel we are being gaslit into believing that generative AI is currently creating quality creative output, while Al (henceforth Alfred) is overwhelmingly creating mediocre quality creative work.</p><p>Every past winter, Alfred survives in a more focused and refined form. Eliza was a toy chatbot of the 1960s, but what emerged from that is expanded investment in things like Natural Language Processing and Markov chains.</p><p>Markov chains have always been powerful, but through computing applications, Markov chains led to advancing capabilities in weather prediction, financial modeling, and eventually Google PageRank and bioinformatics/biostatistics (BLOSUM/BLAST for analyzing/predicting/correlating amino acid chain similarities).</p><p>The next boom and winter led Alfred to popularize classic machine learning into practical applications in consumer products: recommendation engines and clustering algorithms that formed the core of products from the Netflix Prize to Spotify, Pandora, and Amazon; eventually, virtually every consumer retailer had multiple machine learning products supporting everything from suggested products to search results to its own logistics, sales, and financial modeling.</p><p>Learn from the past. Prepare for the future. Al will learn how to spell strawberry, write basic documents and code more effectively, and make fewer basic mistakes. Think beyond that. What are the emerging harms that come AFTER all of that?<br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/generative_ai_risks/" rel="tag">#generative_ai_risks</a> <a href="/tags/generative_ai_concerns/" rel="tag">#generative_ai_concerns</a></p>
<p>In which I build Unicode character tables and fail to serialize large automata, “Losing 1½ Million Lines of Go“:<br><a href="https://www.tbray.org/ongoing/When/202x/2026/01/14/Unicode-Properties" rel="nofollow" class="ellipsis" title="www.tbray.org/ongoing/When/202x/2026/01/14/Unicode-Properties"><span class="invisible">https://</span><span class="ellipsis">www.tbray.org/ongoing/When/202</span><span class="invisible">x/2026/01/14/Unicode-Properties</span></a></p><p>(And in which I find myself sliding into the <a href="/tags/genai/" rel="tag">#GenAI</a>-is-ok-for-coding camp.)</p>
I've been playing around with this set of ideas and questions:<br><br>An image of a cat is not a cat, no matter how many pixels it has. A video of a cat is not a cat, no matter the framerate. An interactive 3-d model of a cat is not a cat, no matter the number of voxels or quality of dynamic lighting and so on. In every case, the computer you're using to view the artifact also gives you the ability to dispel the illusion. You can zoom a picture and inspect individual pixels, pause a video and step through individual frames, or distort the 3-d mesh of the model and otherwise modify or view its vertices and surfaces, things you can't do to cats even by analogy. As nice or high-fidelity as the rendering may be, it's still a rendering, and you can handily confirm that if you're inclined to.<br><br>These facts are not specific to images, videos, or 3-d models of cats. These are necessary features of digital computers. Even theoretically. The computable real numbers form a countable subset of the uncountably infinite set of real numbers that, for now at least, physics tells us our physical world embeds in. Georg Cantor showed us there's an infinite difference between the two; and Alan Turing showed us that it must be this way. In fact it's a bit worse than this, because (most) physics deals in continua, and the set of real numbers, big as it is, fails to have a few properties continua are taken to have. C.S. Peirce said that continua contain such multitudes of points smashed into so little space that the points fuse together, becoming inseparable from one another (by contrast we can speak of individual points within the set of real numbers). Time and space are both continua in this way.<br><br>Nothing we can represent in a computer, even in a high-fidelity simulation, is like this. Temporally, computers have a definite cha-chunk to them: that's why clock speeds of CPUs are reported. 
As rapidly as these oscillations happen relative to our day-to-day experience, they are still cha-chunk cha-chunk cha-chunk discrete turns of a ratchet. There's space in between the clicks that we sometimes experience as hardware bugs, hacks, errors: things with negative valence that we strive to eliminate or ignore, but can never fully. Likewise, even the highest-resolution picture still has pixels. You can zoom in and isolate them if you want, turning the most photorealistic image into a Lite Brite. There's space between the pixels too, which you can see if you take a magnifying glass to your computer monitor, even the retina displays, or if you look at the data within a PNG.<br><br>Images have glitches (e.g., the aliasing around hard edges old JPEGs had). Videos have glitches (e.g., those green flashes or blurring when keyframes are lost). Meshes have glitches (e.g., when they haven't been carefully topologized and applied textures crunch and distort in corners). 3-d interactive simulations have <a href="https://www.youtube.com/channel/UCto7D1L-MiRoOziCXK9uT5Q" rel="nofollow">unending glitches</a>. The glitches manifest differently, but they're always there, or lurking. <a href="https://www.dullien.net/thomas/weird-machines-exploitability.pdf" rel="nofollow">They are reminders</a>.<br><br>With all that said: why would anyone believe generative AI could ever be intelligent? The only instances of intelligence we know inhabit the infinite continua of the physical world with its smoothly varying continuum of time (so science tells us anyway). Wouldn't it be more to the point to call it an intelligence simulation, and to mentally maintain the space between it and "actual" intelligence, whatever that is, analogous to how we maintain the mental space between a live cat and a cat video?<br><br>This is not to say there's something essential about "intelligence", but rather that there are unanswered questions here that seem important. 
It doesn't seem wise to assume they've been answered before we're even done figuring out how to formulate them well.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llms/" rel="tag">#LLMs</a><br>
<p>A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link up their health apps in exchange for the chatbot’s advice about diet, sleep, work-outs, and even personal insurance decisions. <br></p>(from <a href="https://buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/" rel="nofollow" class="ellipsis" title="buttondown.com/maiht3k/archive/chatgpt-wants-your-health-data/"><span class="invisible">https://</span><span class="ellipsis">buttondown.com/maiht3k/archive</span><span class="invisible">/chatgpt-wants-your-health-data/</span></a>).<br><br>This is the probably inevitable endgame of FitBit and other "measured life" technologies. It isn't about health; it's about mass managing bodies. It's a short hop from there to mass managing minds, which this "psychologized" technology is already being deployed to do (AI therapists and whatnot). Fully corporatized human resource management for the leisure class (you and I are not the intended beneficiaries, to be clear; we're the mass).<br><br>Neural implants would finish the job, I guess. It's interesting how the tech sector pushes its tech closer and closer to the physical head and face. Eventually the push to penetrate the head (e.g. Neuralink) should intensify. Always with some attached promise of convenience, privilege, wealth, freedom of course.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/openai/" rel="tag">#OpenAI</a> <a href="/tags/chatgpt/" rel="tag">#ChatGPT</a> <a href="/tags/health/" rel="tag">#health</a> <a href="/tags/healthtech/" rel="tag">#HealthTech</a><br>
This is a long time coming. I've been posting about the decline of arXiv's CS category for a long time now, and even had a few conversations with someone I know who works there about it. Personally, I think the slop started in 2018--prior to generative AI slop--when the CS category at arXiv began the unsustainable exponential growth in submissions that has continued till today. An increasing number of what amounted to corporate whitepapers and other marketing materials were being posted on arXiv to give them the appearance of scientific credibility. There was a fairly clear arXiv-to-Nature pipeline. Citation counts were pumped as some of the scientometric services count arXiv "articles" as citations, and some researchers adopted the bad scholarly habit of citing arXiv preprints instead of the final publication. It was and still is a mess. My understanding is that arXiv was meant as a place for people to put high-quality but pre-publication articles, but at least in the CS category it's drifted quite far from that.<br><br>I gather they've finally taken this measure because of the preponderance of AI-generated slop, but with any luck these other issues will improve too. 
The arXiv press release states “Review/survey articles or position papers submitted to arXiv without this documentation will be likely to be rejected and not appear on arXiv” so it does sound like they are acknowledging the other problems and intend to enforce their rules more strictly in the future.<br><br>"arXiv says it will no longer accept Computer Science papers that are still under review due to the wave of AI-generated ones it has received."<br>From <a href="https://infosec.exchange/users/josephcox/statuses/115486903712973154" rel="nofollow" class="ellipsis" title="infosec.exchange/users/josephcox/statuses/115486903712973154"><span class="invisible">https://</span><span class="ellipsis">infosec.exchange/users/josephc</span><span class="invisible">ox/statuses/115486903712973154</span></a><br><br><a href="/tags/arxiv/" rel="tag">#arXiv</a> <a href="/tags/preprint/" rel="tag">#preprint</a> <a href="/tags/cs/" rel="tag">#CS</a> <a href="/tags/spam/" rel="tag">#spam</a> <a href="/tags/aislop/" rel="tag">#AISlop</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a><br>
Since I'm job and work hunting I tend to see the absurd new job titles that are bouncing around in the tech sector. The latest, which I've seen twice today, is "artificial general intelligence engineer" or some permutation thereof. I do my best to spend the minimum possible time on these and have no guess about whether they're legitimate.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/agi/" rel="tag">#AGI</a><br>
I put the text below on LinkedIn in response to a post there and figured I'd share it here too because it's a bit of a step from what I've been posting previously on this topic and might be of some use to someone.<br><br>In retrospect I might have written non-sense in place of nonsense.<br><br>If you're in tech the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.<br><br><br><p>It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast fuzzy data store and retrieval systems, so the theory-free ideal dictates the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the >200 years since Laplace forwarded that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. 
Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this is nihilistic).<br></p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/coding/" rel="tag">#coding</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/softwaredevelopment/" rel="tag">#SoftwareDevelopment</a> <a href="/tags/programming/" rel="tag">#programming</a> <a href="/tags/nihilism/" rel="tag">#nihilism</a> <a href="/tags/linkedin/" rel="tag">#LinkedIn</a><br>
I am astonished to have bookmarked a message from the Pope in my pile of AI-related links.<br><br><a href="https://www.vatican.va/content/leo-xiv/en/messages/communications/documents/20260124-messaggio-comunicazioni-sociali.html" rel="nofollow">MESSAGE OF HIS HOLINESS POPE LEO XIV FOR THE 60TH WORLD DAY OF SOCIAL COMMUNICATIONS</a><br><br>His emphasis on face and voice is good.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/popeleo/" rel="tag">#PopeLeo</a><br>
<p>"Sources tell Windows Central that internal teams are also beginning to push back against excessive integration, and the company may therefore reconsider its stance.</p><p>At the very least, Microsoft may dial back Copilot features or remove the chatbot’s branding from apps like Notepad and Paint to make the experience feel more conventional, the report says."</p><p><a href="https://www.pcmag.com/news/microsoft-reportedly-plans-to-dial-back-copilot-across-windows-11-apps" rel="nofollow" class="ellipsis" title="www.pcmag.com/news/microsoft-reportedly-plans-to-dial-back-copilot-across-windows-11-apps"><span class="invisible">https://</span><span class="ellipsis">www.pcmag.com/news/microsoft-r</span><span class="invisible">eportedly-plans-to-dial-back-copilot-across-windows-11-apps</span></a></p><p><a href="/tags/news/" rel="tag">#news</a> <a href="/tags/technews/" rel="tag">#TechNews</a> <a href="/tags/technology/" rel="tag">#technology</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/microsoft/" rel="tag">#microsoft</a></p>
I'm tinkering with an argument based on algorithmic complexity that if it were possible to make something like an "automated mathematician" or "automated scientist", then these would be expected to eventually produce outputs that we humans would be unable to distinguish from random noise.<br><br>Getting the whole argument just right is fiddly, but the basic idea is this. You feed some kind of theory into the AM/AS, which is a black box. It churns on this and spits out a result, which is added to the theory (I'm neglecting the case that the result is inconsistent with the theory). It can now churn on theory + result 1. For any given and potentially very large N, after doing this long enough, it's churning on theory + result 1 + result 2 + ... + result N. Whatever it spits out will be dependent in particular on results 1 - N. When N is large enough, unless you know these results you will not be able to understand what it outputs because the output will almost surely depend critically on one or more of results 1 - N. In other words, the output will look like noise to you. If the AM/AS is appreciably faster at producing results than people are at understanding them, there will be an N beyond which no one can understand the output up to that point. It'll become indistinguishable from random noise.<br><br>If you're into software development, this would be analogous to a software system that generates syntactically-correct code and then adds that code as a new call in a growing software library. If you were to run this long enough, virtually all the programs it generated that were short enough for human beings to have any hope of reading and understanding would consist almost entirely of library calls to code generated by the system. You'd have no idea what any of this code did unless you studied the library calls, which you wouldn't be able to do beyond a certain scale. 
If the system were expanding the library faster than you could read and understand it, there'd be no hope at all.<br><br>I'll leave it as an exercise to the reader whether this is a desirable thing to do and whether it's happened yet. I would offer, though, a question to ponder: what reason is there to believe that a random number generator hooked up to an inscrutable interpreter produces human flourishing, for any given meaning of "human flourishing" you care to use?<br><br><a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/mathematics/" rel="tag">#mathematics</a> <a href="/tags/automatedmathematician/" rel="tag">#AutomatedMathematician</a> <a href="/tags/automatedscientist/" rel="tag">#AutomatedScientist</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/thoughtexperiment/" rel="tag">#ThoughtExperiment</a><br>
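The growing-library analogy above can be made concrete with a toy simulation. This is a sketch of my own with made-up parameters (the function names, `k`, and `n_results` are all illustrative assumptions, not from any real system): each new "result" cites a few randomly chosen earlier ones, and following the latest result means reading its entire transitive dependency closure.

```python
import random

# Toy model of the thought experiment: each new result cites k randomly
# chosen earlier results, forming a citation DAG.

def simulate(n_results=2000, k=3, seed=0):
    """Build the DAG: deps[n] lists the earlier results that result n cites."""
    rng = random.Random(seed)
    deps = [[] for _ in range(n_results)]
    for n in range(1, n_results):
        deps[n] = rng.sample(range(n), min(k, n))  # early results cite fewer
    return deps

def dependency_closure(deps, n):
    """Every earlier result that result n transitively depends on."""
    seen, stack = set(), list(deps[n])
    while stack:
        m = stack.pop()
        if m not in seen:
            seen.add(m)
            stack.extend(deps[m])
    return seen

deps = simulate()
closure = dependency_closure(deps, len(deps) - 1)
# Share of all prior results you'd have to read to follow the last one:
print(f"{len(closure) / (len(deps) - 1):.0%}")
```

Even with only a handful of citations per result, the closure of a late result tends to reach back through a large share of everything produced before it, which is the sense in which the output becomes unreadable past a certain scale.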
Compare and contrast<br><br>This:<br><p>In the year of the city 2274, the remnants of human civilization live in a sealed city beneath a cluster of geodesic domes, a utopia run by computer. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Carrousel", a public ritual that destroys their bodies, under the pretense they would be "Renewed" or reborn.<br></p>(<a href="https://en.wikipedia.org/wiki/Logan's_Run_(film)" rel="nofollow">Logan's Run</a>)<br><br>and this:<br><p>In the year of the city 2274, the colony of human beings on Mars live in a sealed city beneath a cluster of geodesic domes, a utopia run by generative AI. The citizens live a hedonistic lifestyle, but when they turn 30 must enter the "Cloud", a public ritual that destroys their bodies, under the pretense their consciousness would be uploaded to a computer and live forever.<br></p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/mars/" rel="tag">#Mars</a> <a href="/tags/eugenics/" rel="tag">#eugenics</a> <a href="/tags/logansrun/" rel="tag">#LogansRun</a> <a href="/tags/sciencefiction/" rel="tag">#ScienceFiction</a> <a href="/tags/dystopia/" rel="tag">#dystopia</a><br>
🇪🇺