
AI amplifies intelligence, but only if you’re already smart, says Balaji

Cryptopolitan
2025-08-03 15:37:54

Balaji says AI is polytheistic, not monotheistic: there isn't one super-intelligent system ruling over all. Instead, there are many strong AIs, each backed by different players. In his words: "We empirically observe polytheistic AI… rather than a single all-powerful model." That wipes out the fantasy of one AGI turning the world into paperclips. What we have is a balance of power between many human/AI combinations, not one dominant force.

He says AI right now only works "middle-to-middle." It doesn't handle full jobs from start to finish. You still need people at both ends: one to prompt the AI and another to check its output. So the real costs and effort have shifted to the edges, prompting and verifying, and that's where companies are now spending their money, even though AI speeds up the middle of the process.

AI makes you smarter, but only if you are already smart

Balaji doesn't call it artificial intelligence; he calls it amplified intelligence. The AI isn't acting on its own: it's not fully agentic, it doesn't set long-term goals, and it can't verify its own output. "You have to spend a lot of effort on prompting, verifying, and system integrating," he said. So how useful AI is depends on how smart you are. If you give it bad instructions, it gives you bad results.

He also says AI doesn't replace you; it just helps you do more jobs. With it, you can fake your way into being a passable UI designer or game animator. But don't expect expert quality: AI makes you good enough to be average, not excellent. For real quality, you still need specialists.

There is one job it does take, though: the job of the last version of itself. Midjourney pushed Stable Diffusion out of the workflow. GPT-4 took GPT-3's spot. As Balaji puts it, "AI doesn't take your job, it takes the job of the previous AI." Once companies create a space for AI in a workflow, like image creation or code generation, that space stays filled.
It just gets handed off to the newer, better model.

He also says AI is better at visuals than text: it's easier for humans to judge a picture than to verify a wall of code or paragraphs of prose. "User interfaces and images can easily be verified by the human eye," Balaji says. With text, checking accuracy is slower and more costly.

Crypto limits what AI can and can't do

Balaji draws a line between how AI works and how crypto works. AI is probabilistic; it guesses based on patterns. Crypto is deterministic; it runs on hard, provable math. So crypto becomes a boundary that AI can't easily cross. AI might break captchas, but it can't fake a blockchain balance. "AI makes everything fake, but crypto makes it real again," he says. AI might solve simple equations, but cryptographic equations still block it.

There's also already a version of killer AI out there: drones. "Every country is pursuing it," Balaji says. It's not image generators or chatbots that pose the threat, it's autonomous weapons. That's the area where AI's real-world impact is already lethal.

He also argues that AI is decentralizing, not centralizing. Right now there are tons of AI companies, not just one or two giants. Small teams with good tools can do a lot, and open-source models are improving fast. So even without massive budgets, small groups can build strong AI systems. That breaks up power instead of concentrating it.

Balaji also rejects the idea that more AI is always better. He says the ideal amount is not zero and not 100%: "0% AI is slow, but 100% AI is slop." Real value lives in between. Too little AI means you're behind; too much, and quality falls apart. He compares it to the Laffer Curve, the economics concept that there's a sweet spot between extremes.

In his final argument, he lays out why today's systems are constrained AIs, not godlike machines. He breaks that into four kinds of limits:

Economic: Every API call costs money. Using AI at scale isn't free.
Mathematical: AI can't solve chaotic or cryptographic problems.
Practical: You still need humans to prompt and verify results. AI can't complete the full task alone.
Physical: AI doesn't gather real-world data on its own. It can't sense its environment or interpret it like people do.

He ends by saying these limits might be removed later. Future researchers may find a way to merge System 1 thinking (fast and intuitive, like AI) with System 2 thinking (logical and careful, like traditional computing). But right now, that's just theory; it's still an open problem. There is no all-knowing AI. There are just tools (expensive, limited, competitive tools) that do what they're told and need constant checking.

