The Bitter Religion: The Holy War Unfolding Around AI's Scaling Laws
In the field of artificial intelligence, faith and technology have collided in a heated debate over the validity and future of the "scaling laws." This article explores the rise, schisms, and possible consequences of this "bitter religion," revealing the complex relationship between faith and science. The AI community is locked in a doctrinal battle over its future, and over whether it will grow large enough to create a god. This article was written by Mario Gabriele and compiled by Block unicorn.

(Synopsis: Musk's xAI completes $6 billion Series C financing, with NVIDIA, BlackRock, a16z, and other industry heavyweights participating)

(Background: NVIDIA will launch the humanoid robot computing platform "Jetson Thor" next year. The ChatGPT moment of physical AI?)

"I would rather live my life as if there is a God and die to find out there isn't, than live my life as if there isn't and die to find out there is." — Blaise Pascal

Religion is an interesting thing. Perhaps because it is entirely unprovable in either direction, or perhaps, as one of my favorite quotes puts it: "You can't fight feelings with facts." A hallmark of religious belief is that, as faith rises, it accelerates at such an incredible rate that doubting the existence of God becomes almost impossible. How can you doubt a divine being when everyone around you believes in it ever more deeply? When the world rearranges itself around a doctrine, where does heresy find a foothold? Where is there room for dissent when temples and cathedrals, laws and norms, are all arranged according to a new and unshakable gospel? When the Abrahamic religions first emerged and spread across continents, or when Buddhism spread from India throughout Asia, the immense momentum of faith created a self-reinforcing cycle.
As more people convert and elaborate theological systems and rituals are built around these beliefs, questioning the basic premises becomes ever more difficult. In a sea of believers, it is not easy to be a heretic. Magnificent churches, intricate scriptures, and thriving monasteries all serve as physical evidence of the divine.

But the history of religion also shows how easily such structures can collapse. As Christianity spread to Scandinavia, the old Norse faith crumbled within a few generations. The religious system of ancient Egypt endured for thousands of years, only to vanish when newer, more persistent beliefs and larger power structures emerged. Even within a single religion we see dramatic schisms: the Reformation tore apart Western Christianity, while the Great Schism split the Eastern and Western churches. These divisions often begin with seemingly trivial doctrinal differences and gradually evolve into entirely different belief systems.

"God is a metaphor for that which transcends all levels of intellectual thought. It's as simple as that." — Joseph Campbell

Simply put, believing in God is religion. Perhaps creating God is no different. Since the field's inception, optimistic AI researchers have imagined their work as creationism: the creation of a god. Over the past few years, the explosion of large language models (LLMs) has strengthened believers' conviction that we are on a sacred path. It has also canonized a blog post written in 2019. Although people outside the field knew little of it until recently, Canadian computer scientist Richard Sutton's "The Bitter Lesson" has become an increasingly important text in the community, evolving from esoteric knowledge into a new, all-encompassing religious foundation.
In 1,113 words (every religion needs sacred numbers), Sutton summarized a technical observation: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." Advances in AI models have benefited from exponential increases in computing resources, riding the great wave of Moore's Law. Meanwhile, Sutton notes, much of AI research has focused on optimizing performance through specialized techniques: adding human knowledge or narrow tooling. While these optimizations may help in the short term, in Sutton's view they are ultimately a waste of time and resources, like fiddling with a surfboard's fins or trying new wax when a giant wave is rolling in.

This is the foundation of what we call the "bitter religion." It has only one commandment, generally referred to in the community as the "scaling law": exponentially increasing computation drives performance; all the rest is folly.

The bitter religion has expanded from large language models (LLMs) to world models, and is now spreading rapidly through the unconverted temples of biology, chemistry, and embodied intelligence (robotics and autonomous vehicles). However, as Sutton's doctrine has spread, its definitions have begun to shift. This is the hallmark of all active, living religions: argument, extension, commentary. The "scaling law" no longer refers only to scaling computation (the ark is not just a ship); it now covers a variety of methods for improving transformer and compute performance, with a few tricks thrown in. The canon now encompasses attempts to optimize every part of the AI stack, from techniques applied to the core models themselves (model merging, mixture of experts (MoE), and knowledge distillation) all the way to generating synthetic data to feed these perpetually hungry gods, with plenty of experimentation in between.
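The "scaling law" commandment described above is usually stated empirically as a power law: loss falls predictably as compute grows. As a minimal illustration (the constants below are invented for demonstration and do not come from any published fit), here is how such a power law can be recovered from data with NumPy:

```python
import numpy as np

# Hypothetical scaling-law sketch: loss L(C) = a * C^(-b) + irreducible_error.
# The constants are made up for illustration, not taken from any paper.
a, b, irreducible = 10.0, 0.3, 1.5

compute = np.logspace(18, 24, 20)          # FLOPs, spanning six orders of magnitude
loss = a * compute ** (-b) + irreducible   # synthetic, noiseless "training loss"

# Fit the power-law exponent in log-log space after subtracting the
# irreducible term (assumed known here purely to keep the sketch short).
slope, intercept = np.polyfit(np.log(compute), np.log(loss - irreducible), 1)

print(f"recovered exponent:  {-slope:.3f}")
print(f"recovered prefactor: {np.exp(intercept):.3f}")
```

The straight-line fit in log-log space is exactly why believers find scaling so seductive: a few points on the line appear to let you extrapolate performance at compute budgets no one has spent yet.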
Warring sects

Recently, a question has swirled through the AI community with the air of a holy war: is the "bitter religion" still correct? This week, Harvard, Stanford, and MIT published a new paper called "Scaling Laws for Precision," igniting the conflict. The paper discusses the end of efficiency gains from quantization, a family of techniques that improve the performance of AI models and have benefited the open-source ecosystem. Tim Dettmers, a research scientist at the Allen Institute for Artificial Intelligence, outlined its significance in the post below, calling it "the most important paper in a long time." It continues the heated dialogue of the past few weeks and reveals a notable trend: the growing consolidation of two religions.

OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei belong to the same sect. Both confidently claim that we will achieve artificial general intelligence (AGI) within roughly the next two to three years. Altman and Amodei are arguably the two figures most dependent on the sanctity of the "bitter religion." All of their incentives favor overpromising, generating maximum hype to accumulate capital in a game dominated almost entirely by economies of scale. If the scaling laws are not the "alpha and omega, the first and the last, the beginning and the end," then what do you need $22 billion for?

Former OpenAI chief scientist Ilya Sutskever adheres to a different creed. He, along with other researchers, including many inside OpenAI judging by recent leaks, believes that scaling is approaching its ceiling. This camp holds that sustaining progress and bringing AGI into the real world will require new science and new research. The Sutskever school argues, reasonably, that Altman's vision of scaling indefinitely is not economically viable. As artificially ...
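The quantization techniques whose limits the paper examines can be sketched in a few lines. This is an illustrative toy (per-tensor symmetric int8 quantization), not the paper's method:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights onto the int8 range [-127, 127] with one scale per tensor."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# The round-trip error is bounded by half a quantization step (scale / 2).
print("max abs error:", np.max(np.abs(w - w_hat)))
print("step bound:   ", scale / 2)
```

Shrinking weights from 16 or 32 bits to 8 (or fewer) cuts memory and inference cost, which is why the open-source ecosystem leaned on it so heavily; the paper's claim that these gains are running out is what makes it a shot fired in the doctrinal war.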