"You need suckers who think AI is effectively magic:<br><br>… people with lower AI literacy perceive AI to be more magical, and thus experience greater feelings of awe when thinking about AI completing tasks, which explain their greater receptivity towards using AI-based products and services. <br><br>AI marketing is 100% about whether the sucker is sufficiently wowed by an impressive demo."<br><br>From <a href="https://econtwitter.net/users/amycastor/statuses/113902204443133686" rel="nofollow" class="ellipsis" title="econtwitter.net/users/amycastor/statuses/113902204443133686"><span class="invisible">https://</span><span class="ellipsis">econtwitter.net/users/amycasto</span><span class="invisible">r/statuses/113902204443133686</span></a><br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/snakeoil/" rel="tag">#SnakeOil</a><br>
<p>[NEW PAPER ALERT!] Our grantee @A__W______O's new paper puts forward a vision for balancing the benefits and risks of <a href="/tags/opensource/" rel="tag">#opensource</a> <a href="/tags/genai/" rel="tag">#GenAI</a> (funded by <span class="h-card"><a href="https://mastodon.world/@DigInfFund" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>DigInfFund</span></a></span>). Drafted by Nick Botton & Mathias Vermeulen - a short thread on <a href="/tags/boundariesofopenness/" rel="tag">#boundariesofopenness</a> <a href="/tags/digitalinfrastructure/" rel="tag">#digitalinfrastructure</a></p>
If you turn the sink off when you're done using it to conserve water, or turn the lights off when you leave the room to conserve electricity, why do you use ChatGPT or other AI tools? Using those sorts of tools a few times a month negates whatever you've conserved by being prudent in other parts of your life.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/waste/" rel="tag">#waste</a> <a href="/tags/environment/" rel="tag">#environment</a><br>
<p>Resistance to the coup is the defense of the human against the digital and the democratic against the oligarchic.<br></p>From <a href="https://snyder.substack.com/p/of-course-its-a-coup" rel="nofollow" class="ellipsis" title="snyder.substack.com/p/of-course-its-a-coup"><span class="invisible">https://</span><span class="ellipsis">snyder.substack.com/p/of-cours</span><span class="invisible">e-its-a-coup</span></a><br><br>Defense of the human against the digital has been my mission for some time. Resisting the narratives about how <a href="/tags/llms/" rel="tag">#LLMs</a> "reason", "pass the Turing test", "diagnose illnesses", are "better than humans" in various ways is part of it. Resisting the false narrative that we're on the verge of discovering <a href="/tags/agi/" rel="tag">#AGI</a> is part of it. Allowing these false stories to persist and spread means succumbing to very dark anti-human forces. We're seeing some of the consequences now, and we're seeing how far this might go.<br><br><a href="/tags/uspol/" rel="tag">#USPol</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/agi/" rel="tag">#AGI</a><br>
<p>Some argue that AI technology is more significant than electricity or the internet, and so it will spread fast. But there is little sign of this. Only 5-6% of American businesses said they used AI to produce goods and services in 2024, according to the country’s Census Bureau.<br></p>From <a href="https://www.economist.com/catch-up-rampage-in-new-orleans-tesla-cybertruck-explosion/2024/12/31/the-ai-productivity-puzzle" rel="nofollow" class="ellipsis" title="www.economist.com/catch-up-rampage-in-new-orleans-tesla-cybertruck-explosion/2024/12/31/the-ai-productivity-puzzle"><span class="invisible">https://</span><span class="ellipsis">www.economist.com/catch-up-ram</span><span class="invisible">page-in-new-orleans-tesla-cybertruck-explosion/2024/12/31/the-ai-productivity-puzzle</span></a>, titled The AI productivity puzzle<br><br>What's the puzzle? It has its uses but generally this technology is not particularly useful for most people. Countless billions of dollars have been spent hyping it up to pretend that isn't the case. But reality matters.<br><br>Nowadays I think of generative AI as a power grab. It's not a useful tool for a lot of people, but it's exceptionally useful to people who want to grab and hold power, or restructure power in their favor.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a><br>
Simulated food does not nourish you any more than simulated thought educates you.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a><br>
Massive compute power applied to massive data sets can produce outcomes that are worse at the task they’re (ostensibly) intended for than much simpler, easier to understand, less wasteful, and less intrusive data-light methods. It requires an extreme form of bias to believe that big compute + big data is always better.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llms/" rel="tag">#LLMs</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/datascience/" rel="tag">#DataScience</a> <a href="/tags/science/" rel="tag">#science</a> <a href="/tags/computerscience/" rel="tag">#ComputerScience</a> <a href="/tags/ecologicalrationality/" rel="tag">#EcologicalRationality</a><br>
<p>The present perspective outlines how epistemically baseless and ethically pernicious paradigms are recycled back into the scientific literature via machine learning (ML) and explores connections between these two dimensions of failure. We hold up the renewed emergence of physiognomic methods, facilitated by ML, as a case study in the harmful repercussions of ML-laundered junk science. A summary and analysis of several such studies is delivered, with attention to the means by which unsound research lends itself to social harms. We explore some of the many factors contributing to poor practice in applied ML. In conclusion, we offer resources for research best practices to developers and practitioners.<br></p>From The reanimation of pseudoscience in machine learning and its ethical repercussions here: <a href="https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0" rel="nofollow" class="ellipsis" title="www.cell.com/patterns/fulltext/S2666-3899(24)00160-0"><span class="invisible">https://</span><span class="ellipsis">www.cell.com/patterns/fulltext</span><span class="invisible">/S2666-3899(24)00160-0</span></a>. It's open access.<br><br>In other words ML--which includes generative AI--is smuggling long-disgraced pseudoscientific ideas back into "respectable" science, and rejuvenating the harms such ideas cause.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llms/" rel="tag">#LLMs</a> <a href="/tags/machinelearning/" rel="tag">#MachineLearning</a> <a href="/tags/ml/" rel="tag">#ML</a> <a href="/tags/aiethics/" rel="tag">#AIEthics</a> <a href="/tags/science/" rel="tag">#science</a> <a href="/tags/pseudoscience/" rel="tag">#pseudoscience</a> <a href="/tags/junkscience/" rel="tag">#JunkScience</a> <a href="/tags/eugenics/" rel="tag">#eugenics</a> <a href="/tags/physiognomy/" rel="tag">#physiognomy</a><br>
Mel Andrews on the connections between a naive belief in scientific objectivity (facts and data are "real" and "correct" and "neutral") and eugenics:<br><p>Francis Galton, pioneering figure of the eugenics movement, believed that good research practice should consist in “gathering as many facts as possible without any theory or general principle that might prejudice a neutral and objective view of these facts” (Jackson et al., 2005). Karl Pearson, statistician and fellow purveyor of eugenicist methods, approached research with a similar ethos: “theorizing about the material basis of heredity or the precise physiological or causal significance of observational results, Pearson argues, will do nothing but damage the progress of the science” (Pence, 2011). In collaborative work with Pearson, Weldon emphasised the superiority of data-driven methods which were capable of delivering truths about nature “without introducing any theory” (Weldon, 1895).<br></p>From The Immortal Science of ML: Machine Learning & the Theory-Free Ideal.<br><br>I've lost the reference, but I suspect it was Meredith Whittaker who's written and spoken about the big data turn at Google, where it was understood that having and collecting massive datasets allowed them to eschew model-building.<br><br>The core idea being critiqued here is that there's a kind of scientific view from nowhere: a theory-free, value-free, model-free, bias-free way of observing the world that will lead to Truth; and that it's the task of the scientist to approximate this view from nowhere as well as possible.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llms/" rel="tag">#LLMs</a> <a href="/tags/science/" rel="tag">#science</a> <a href="/tags/datascience/" rel="tag">#DataScience</a> <a href="/tags/scientificobjectivity/" rel="tag">#ScientificObjectivity</a> <a href="/tags/eugenics/" rel="tag">#eugenics</a> <a 
href="/tags/viewfromnowhere/" rel="tag">#ViewFromNowhere</a><br>
<p>New Open at Intel Podcast! I spoke with Andrew Brown of Exam Pro about his free generative AI bootcamp for developers, <a href="/tags/deepseek/" rel="tag">#Deepseek</a>, keeping up with AI development and its rapidly moving pace, and a lot more. Check it out!</p><p>Episode: <a href="https://openatintel.podbean.com/e/mastering-generative-ai/" rel="nofollow" class="ellipsis" title="openatintel.podbean.com/e/mastering-generative-ai/"><span class="invisible">https://</span><span class="ellipsis">openatintel.podbean.com/e/mast</span><span class="invisible">ering-generative-ai/</span></a></p><p>Clip: <a href="https://youtube.com/shorts/ofsiK5cF1u0?feature=share" rel="nofollow" class="ellipsis" title="youtube.com/shorts/ofsiK5cF1u0?feature=share"><span class="invisible">https://</span><span class="ellipsis">youtube.com/shorts/ofsiK5cF1u0</span><span class="invisible">?feature=share</span></a></p><p><a href="/tags/opensource/" rel="tag">#OpenSource</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a></p>
Regarding the last couple boosts: among other downsides, LLMs encourage people to take long-term risks for perceived, but not always actual, short-term gains. They bet the long-term value of their education on a chance at short-term grade inflation, or they bet the long-term security and maintainability of their software codebase on a chance at short-term productivity gains. My read is that more and more data is suggesting that these are bad bets for most people.<br><br>In that respect they're very much like gambling. The messianic fantasies some ChatGPT users have been experiencing fit this picture as well.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/chatgpt/" rel="tag">#ChatGPT</a> <a href="/tags/gpt/" rel="tag">#GPT</a> <a href="/tags/gemini/" rel="tag">#Gemini</a> <a href="/tags/gamblingaddiction/" rel="tag">#GamblingAddiction</a> <a href="/tags/nihilism/" rel="tag">#nihilism</a><br>
One definition of the word "artifice" is: crafty device; an artful, ingenious, or elaborate trick. One would be fully justified interpreting the phrase "artificial intelligence" as an elaborate trick resembling intelligence.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a><br>
Over 2,000 years ago Ovid wrote about a sculptor who fell in love with a statue he carved, imputing the ability to love to an arrangement of rock. Today we impute the ability to think to an arrangement of silicon. Stories of breathing life into non-life have been with us for a very long time, yet somehow we're stuck in the same place.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/agi/" rel="tag">#AGI</a> <a href="/tags/pygmalion/" rel="tag">#Pygmalion</a> <a href="/tags/golem/" rel="tag">#Golem</a> <a href="/tags/pinocchio/" rel="tag">#Pinocchio</a> <a href="/tags/talos/" rel="tag">#Talos</a> <a href="/tags/frankenstein/" rel="tag">#Frankenstein</a><br>
The rhetoric that limiting or banning AI/generative AI/LLM/diffusion model use is "ableist" or "gatekeeping" is the latest desperate attempt to find an angle through which to force this technology into our lives against our collective will. We need to reject this narrative. Common as it is, it simply doesn't scan. It reads to me as an attempt to co-opt the language of social justice to shame people into accepting an unjust and largely failing technology that they are rightfully rejecting.<br><br>Think it through. If you don't accept the use of climate-destroying, electricity-and-fresh-water-sapping, job-destroying, economy-thrashing--and yet mediocre or poorly performing!--technology created by multi-trillion-dollar sociopathic entities, then you are preventing people with less privilege than you have from living their best lives. You are preventing them from learning how to code. You are preventing them from obtaining coveted jobs in the tech sector. You are preventing them from having access to information. You, personally, are responsible for all this. Not the multi-trillion-dollar sociopathic entities who've not only created this technology and forced it on us but also contributed to creating the less-privileged conditions of the very people for whom you, with your individual choices, are supposedly responsible. Not the governments that neglected to enforce existing laws that would have prevented such multi-trillion-dollar sociopathic entities from forming in the first place, let alone creating such a technology--while also creating the conditions that led to people being less privileged. No, they are not responsible. You are. I am.<br><br>That doesn't make any sense.<br><br>Neoliberalism's greatest trick has been to shift responsibility for any problems away from the powerful and onto individuals who are not empowered to fix anything, all while convincing everyone that this is right and proper.
Large corporations do not cause a plastic pollution problem; you and I do, by not separating our recycling. Large corporations, governments and militaries do not cause CO2 pollution and climate damage; you and I do, by using incandescent lightbulbs and non-electric/non-hybrid cars or eating meat. Lack of regulation and large agribusiness practices are not to blame for poor food quality; you and I are, for buying what they sell instead of going organic and joining a CSA. Etc. ad infinitum. Large, powerful entities routinely generate a problem, then tell you and me that we are responsible for the problem as well as for fixing it. Never mind that these entities could nudge their own behavior a bit and move the needle on the problem far more than masses of people could no matter how organized they were. Never mind that these entities could be constrained from causing such problems in the first place.<br><br>We are watching a new variation of this pattern come into being right in front of our eyes with AI. We should stop accepting these fictions. You are neither ableist nor a gatekeeper for resisting AI. You are, instead, attempting to forestall the further degradation of conditions for everyone, which starts this same cycle anew.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/diffusionmodels/" rel="tag">#DiffusionModels</a> <a href="/tags/neoliberalism/" rel="tag">#neoliberalism</a> <a href="/tags/depoliticization/" rel="tag">#depoliticization</a><br>
<p>Before the automobile industry invented the catalytic converter, the costs of reducing air pollution seemed astronomical, enough to bankrupt the entire industry. After they invented the catalytic converter, the costs were manageable. And they only invented it because they were faced with the threat of being shut down.<br></p>Industries creating harms often claim that controls and regulations are impossible, would bankrupt them, etc., trying to make their existence into a zero-sum game (for some people to have the benefit of our industry, other people must suffer). AI companies claim they must steal copyrighted works because they could not exist otherwise, or that they must be allowed to use as much electricity as they demand regardless of the costs. But it's B.S., and we should stop accepting this rhetoric. Forced to innovate to reduce harms, these industries have innovated, and made themselves even more profitable than they were when they were dragging their feet about it like children who don't want to clean their rooms.<br><br>From <a href="https://lpeproject.org/blog/is-climate-change-an-externality/" rel="nofollow">Is Climate Change An Externality</a><br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/law/" rel="tag">#law</a> <a href="/tags/copyright/" rel="tag">#copyright</a> <a href="/tags/pollution/" rel="tag">#pollution</a> <a href="/tags/climate/" rel="tag">#climate</a> <a href="/tags/innovation/" rel="tag">#innovation</a><br>
<p>Got a website?</p><p>Feel like helping make unauthorized LLM scrapers choke on an infinite sea of garbage, potentially making their models collapse?</p><p>...Then take a look at:<br><a href="https://zadzmo.org/code/nepenthes/" rel="nofollow"><span class="invisible">https://</span>zadzmo.org/code/nepenthes/</a></p><p>PS Do make sure to read the warnings, boost and have fun! 😈 </p><p>.</p><p>Thanks to <span class="h-card"><a href="https://fedi.tfnux.org/@dlatchx" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>dlatchx</span></a></span> for reminding me where to find this!</p><p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/nepenthes/" rel="tag">#Nepenthes</a> <a href="/tags/llmpoison/" rel="tag">#LLMPoison</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/markov/" rel="tag">#Markov</a></p>
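The core trick behind tarpits like this is simple: a Markov chain trained on any text will emit an endless stream of statistically plausible but meaningless babble for crawlers to ingest. Here is a toy sketch of that idea (an illustration of the general technique only, not Nepenthes' actual code; the corpus and function names are made up):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def babble(chain, length=30, seed=None):
    """Walk the chain, emitting plausible-looking but meaningless text."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        choices = chain.get(key)
        if not choices:  # dead end: restart from a random state
            key = rng.choice(list(chain.keys()))
            choices = chain[key]
        word = rng.choice(choices)
        out.append(word)
        key = tuple(out[-len(key):])
    return " ".join(out)

# Hypothetical seed corpus; a real tarpit would use a much larger one.
corpus = "the cat sat on the mat and the dog sat on the log"
chain = build_chain(corpus)
print(babble(chain, length=10))
```

Serving pages of this stuff through an endless maze of generated links is what makes crawlers "choke": each page looks like fresh content, so a scraper that ignores robots.txt just keeps eating garbage.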
If you created a text corpus consisting only of true, declarative statements in English and trained a large language model on it, a generative AI system built with this trained LLM would still output false statements sometimes. Novel recombinations of true statements are not guaranteed to be true.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a><br>
Several of my papers are in that LibGen database Meta used.<br><br>I feel a bunch of ways about it, but one way I feel is that it adds insult to injury. In all but two cases I was required to sign an onerous agreement to get the paper published, handing over rights to a publisher that is continuing to abuse this arrangement (in my view). I did that begrudgingly because I was early in my career and didn't think I had another option. Later I experimented with refusing to sign these agreements and publishers walked back the terms somewhat (I don't know if that's possible now).<br><br>I also feel that the Meta computer scientists responsible for this betrayed their own colleagues, which I find pretty scummy.<br><br>Anyway, I don't consent to any of this. It's been imposed on me and countless other authors.<br><br><a href="/tags/libgen/" rel="tag">#LibGen</a> <a href="/tags/meta/" rel="tag">#meta</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a><br>
<p>We were given a prompt as an invitation to participate in this newsletter: “How are you using AI in the classroom?” While we have accepted this invitation, we are engaging in the most humanistic act we can imagine—refusing the prompt.<br></p>From How We are Not Using AI in the Classroom <a href="https://static1.squarespace.com/static/53a4b792e4b073bf214c0e66/t/67ddcdb4e1ee531df076cb82/1742589366973/ICMA_MarchNewsletter_v7+FINAL.pdf#page=25" rel="nofollow" class="ellipsis" title="static1.squarespace.com/static/53a4b792e4b073bf214c0e66/t/67ddcdb4e1ee531df076cb82/1742589366973/ICMA_MarchNewsletter_v7+FINAL.pdf#page=25"><span class="invisible">https://</span><span class="ellipsis">static1.squarespace.com/static</span><span class="invisible">/53a4b792e4b073bf214c0e66/t/67ddcdb4e1ee531df076cb82/1742589366973/ICMA_MarchNewsletter_v7+FINAL.pdf#page=25</span></a><br><br>A nice articulation of why "incorporating AI" in the classroom is detrimental to education and learning, imposing long-term costs that no perceived benefit could outweigh.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/chatgpt/" rel="tag">#ChatGPT</a> <a href="/tags/education/" rel="tag">#education</a> <a href="/tags/pedagogy/" rel="tag">#pedagogy</a><br>
It suddenly struck me the other day that generative AI used in government and media has a very Memoirs Found in a Bathtub quality about it. One of the premises of Lem's (amazing) novel is that paper disintegrates for unknown reasons, leaving very few hard records of anything. The book is a mock diary of a person who lived through the disorienting chaos that ensued.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/stanislawlem/" rel="tag">#StanislawLem</a> <a href="/tags/fiction/" rel="tag">#fiction</a> <a href="/tags/scifi/" rel="tag">#SciFi</a> <a href="/tags/sciencefiction/" rel="tag">#ScienceFiction</a> <a href="/tags/cybernetics/" rel="tag">#cybernetics</a><br>
<p>Nothing is more valuable than a clear-headed understanding of which particular lies are most likely to succeed in the present environment, and which are just evanescent byproducts of the generally mendacious atmosphere. Dodge the decoys, save the right kind of energy to counter the real blows. Turning up the heat in lamenting the current crisis risks mistaking a mere mirage for a more substantial threat.<br></p>From “LYING IN POLITICS”: HANNAH ARENDT’S ANTIDOTE TO ANTICIPATORY DESPAIR <a href="https://www.publicbooks.org/lying-in-politics-hannah-arendts-antidote-to-anticipatory-despair/" rel="nofollow" class="ellipsis" title="www.publicbooks.org/lying-in-politics-hannah-arendts-antidote-to-anticipatory-despair/"><span class="invisible">https://</span><span class="ellipsis">www.publicbooks.org/lying-in-p</span><span class="invisible">olitics-hannah-arendts-antidote-to-anticipatory-despair/</span></a><br><br>I found this to be an excellent and orienting read for anyone concerned about the US right now.<br><br>While folks are understandably worried about this administration, which has already inflicted significant harms, it's important to stay level-headed and aligned with the actual facts and truths. That's our primary defense against what's happening, which, as Arendt argued in the 1970s, hinges on a process of defactualization. Shouting "fascism!" and drawing analogies with the Nazis, as this essay argues, is going too far, turning up the heat about a mirage. Much as we wish they'd do better--and they could do better--in point of fact we do still have a functioning judicial system and media ecosystem, and there are significant numbers of people, including politicians and judges, fully willing to challenge every lie the administration emits. As dangerous as these times are we are nowhere near as far along the authoritarian trajectory as shouting "fascism!" makes it sound, and we should really stop doing that.
Doing so grants bluffs and bluster more power than they actually have, which is ultimately a form of surrender. We should recognize our own strength and save it for real threats.<br><br>This is one of many reasons why I relentlessly call BS on generative AI and the claims about it coming out of the technology sector. There is a defactualization process at work there that plays into the broader political one; some of the individuals enacting this defactualization in tech are personally involved in doing the same in the federal government. If you've been watching, you probably know some of their names and the companies they came from. Generative AI is itself a defactualization machine; that's one of its primary appeals to this crew.<br><br>Dodge the decoys and save your energy for the real blows.<br><br><a href="/tags/uspol/" rel="tag">#USPol</a> <a href="/tags/hannaharendt/" rel="tag">#HannahArendt</a> <a href="/tags/authoritarianism/" rel="tag">#authoritarianism</a> <a href="/tags/despair/" rel="tag">#despair</a> <a href="/tags/tech/" rel="tag">#tech</a> <a href="/tags/dev/" rel="tag">#dev</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/llm/" rel="tag">#LLM</a><br>
<p>Bitwarden has released an MCP server for working with Bitwarden password vaults.<br><br>The MCP server's features include locking and unlocking the vault and creating, reading, updating, and deleting vault items.<br><br><a href="https://github.com/bitwarden/mcp-server" rel="nofollow" class="ellipsis" title="github.com/bitwarden/mcp-server"><span class="invisible">https://</span><span class="ellipsis">github.com/bitwarden/mcp-serve</span><span class="invisible">r</span></a><br><br><a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/bitwarden/" rel="tag">#Bitwarden</a><br><br><a href="https://t.me/outvivid/4732" rel="nofollow">Original post on Telegram</a></p>
So-called "generative" AI is the opposite of generative. The word "generative" in the name "generative AI" is a piece of jargon that is, even then, arguably misused in some applications. But as an anti-imagination, remix-only technology, it's just not generative at all, and cannot be. On top of this it impedes creative expression in multiple ways: co-opting it, devaluing it, directing energy away from it, etc.<br><br><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/generativeai/" rel="tag">#GenerativeAI</a> <a href="/tags/creativity/" rel="tag">#creativity</a><br>
<p>I watched someone "vibe code" for an hour and now I think "slot machine coding" is a more appropriate name. "Let us pull the lever again and see if the code gets better with this prompt."</p><p><a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a> <a href="/tags/vibecoding/" rel="tag">#VibeCoding</a> <a href="/tags/programming/" rel="tag">#Programming</a></p>
<p>65% of Wikimedia's most expensive traffic comes from bots.</p><p>"Since January 2024, we have seen the bandwidth used for downloading multimedia content grow by 50%. This increase is not coming from human readers, but largely from automated programs that scrape the Wikimedia Commons image catalog of openly licensed images to feed images to AI models."</p><p><a href="https://diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/" rel="nofollow" class="ellipsis" title="diff.wikimedia.org/2025/04/01/how-crawlers-impact-the-operations-of-the-wikimedia-projects/"><span class="invisible">https://</span><span class="ellipsis">diff.wikimedia.org/2025/04/01/</span><span class="invisible">how-crawlers-impact-the-operations-of-the-wikimedia-projects/</span></a></p><p><a href="/tags/wikimedia/" rel="tag">#wikimedia</a> <a href="/tags/wikipedia/" rel="tag">#wikipedia</a> <a href="/tags/ai/" rel="tag">#AI</a> <a href="/tags/genai/" rel="tag">#GenAI</a></p>