Lummis’ RISE Act is ‘timely and needed’ but short on details
Civil liability law doesn’t often make for great dinner-party conversation, but it can have an immense impact on the way emerging technologies like artificial intelligence evolve.
If badly drawn, liability rules can create barriers to future innovation by exposing entrepreneurs — in this case, AI developers — to unnecessary legal risks. Or so argues US Senator Cynthia Lummis, who last week introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025.
This bill seeks to protect AI developers from being sued in a civil court of law so that physicians, attorneys, engineers and other professionals “can understand what the AI can and cannot do before relying on it.”
Early reactions to the RISE Act from sources contacted by Cointelegraph were mostly positive, though some criticized the bill's limited scope and its deficiencies with regard to transparency standards, and questioned the wisdom of offering AI developers a liability shield.
Most characterized RISE as a work in progress, not a finished document.
Is the RISE Act a “giveaway” to AI developers?
According to Hamid Ekbia, professor at Syracuse University’s Maxwell School of Citizenship and Public Affairs, the Lummis bill is “timely and needed.” (Lummis called it the nation’s “first targeted liability reform legislation for professional-grade AI.”)
But the bill tilts the balance too far in favor of AI developers, Ekbia told Cointelegraph. The RISE Act requires them to publicly disclose model specifications so professionals can make informed decisions about the AI tools they choose to use, but in Ekbia's view that disclosure requirement alone does not redress the imbalance.
Not surprisingly, some were quick to jump on the Lummis bill as a “giveaway” to AI companies. The Democratic Underground, which describes itself as a “left of center political community,” noted in one of its forums that “AI companies don’t want to be sued for their tools’ failures, and this bill, if passed, will accomplish that.”
Not all agree. “I wouldn’t go so far as to call the bill a ‘giveaway’ to AI companies,” Felix Shipkevich, principal at Shipkevich Attorneys at Law, told Cointelegraph.
The RISE Act’s proposed immunity provision appears aimed at shielding developers from strict liability for the unpredictable behavior of large language models, Shipkevich explained, particularly when there’s no negligence or intent to cause harm. From a legal perspective, he said, that’s a rational approach.
The scope of the proposed legislation is fairly narrow. It focuses largely on scenarios in which professionals are using AI tools while dealing with their customers or patients. A financial adviser could use an AI tool to help develop an investment strategy for an investor, for instance, or a radiologist could use an AI software program to help interpret an X-ray.
Related: Senate passes GENIUS stablecoin bill amid concerns over systemic risk
The RISE Act doesn’t really address cases in which there is no professional intermediary between the AI developer and the end-user, as when chatbots are used as digital companions for minors.
Such a civil liability case arose recently in Florida, where a teenager died by suicide after engaging for months with an AI chatbot. The deceased’s family said the software was designed in a way that was not reasonably safe for minors. “Who should be held responsible for the loss of life?” asked Ekbia. Such cases are not addressed in the proposed Senate legislation.
“There is a need for clear and unified standards so that users, developers and all stakeholders understand the rules of the road and their legal obligations,” Ryan Abbott, professor of law and health sciences at the University of Surrey School of Law, told Cointelegraph.
But it’s difficult because AI can create new kinds of potential harms, given the technology’s complexity, opacity and autonomy. The healthcare arena is going to be particularly challenging in terms of civil liability, according to Abbott, who holds both medical and law degrees.
For example, physicians have historically outperformed AI software in medical diagnosis, but evidence is now emerging that in certain areas of medical practice, a human-in-the-loop “actually achieves worse outcomes than letting the AI do all the work,” Abbott explained. “This raises all sorts of interesting liability issues.”
Who will pay compensation if a grievous medical error is made when a physician is no longer in the loop? Will malpractice insurance cover it? Maybe not.
The AI Futures Project, a nonprofit research organization, has tentatively endorsed the bill (the organization was consulted as the bill was being drafted). But executive director Daniel Kokotajlo said that the transparency disclosures demanded of AI developers come up short.
“The public deserves to know what goals, values, agendas, biases, instructions, etc., companies are attempting to give to powerful AI systems.” This bill does not require such transparency and thus does not go far enough, Kokotajlo said.
Also, “companies can always choose to accept liability instead of being transparent, so whenever a company wants to do something that the public or regulators wouldn’t like, they can simply opt out,” said Kokotajlo.
The EU’s “rights-based” approach
How does the RISE Act compare with liability provisions in the EU’s AI Act, the first comprehensive regulation of AI by a major regulator?
The EU’s AI liability stance has been in flux. An EU AI liability directive was first conceived in 2022, but it was withdrawn in February 2025, some say as a result of AI industry lobbying.
Still, EU law generally adopts a human rights-based framework. As noted in a recent UCLA Law Review article, a rights-based approach “emphasizes the empowerment of individuals,” especially end-users like patients, consumers or clients.
A risk-based approach, like that of the Lummis bill, by contrast builds on processes, documentation and assessment tools. It would focus more on bias detection and mitigation, for instance, than on providing affected people with concrete rights.
When Cointelegraph asked Kokotajlo whether a “risk-based” or “rights-based” approach to civil liability was more appropriate for the US, he answered, “I think the focus should be risk-based and focused on those who create and deploy the tech.”
Related: Crypto users vulnerable as Trump dismantles consumer watchdog
The EU takes a more proactive approach to such matters generally, added Shipkevich. “Their laws require AI developers to show upfront that they are following safety and transparency rules.”
Clear standards are needed
The Lummis bill will probably require some modifications before it is enacted into law (if ever).
“I view the RISE Act positively as long as this proposed legislation is seen as a starting point,” said Shipkevich. “It’s reasonable, after all, to provide some protection to developers who are not acting negligently and have no control over how their models are used downstream.”
According to Justin Bullock, vice president of policy at Americans for Responsible Innovation (ARI), “The RISE Act puts forward some strong ideas, including federal transparency guidance, a safe harbor with limited scope and clear rules around liability for professional adopters of AI,” though the ARI has not endorsed the legislation.
But Bullock, too, had concerns about transparency and disclosures, namely ensuring that required transparency evaluations are effective, he told Cointelegraph.
Still, all in all, the Lummis bill “is a constructive first step in the conversation over what federal AI transparency requirements should look like,” said Bullock.
Assuming the legislation is passed and signed into law, it would take effect on Dec. 1, 2025.
Magazine: Bitcoin’s invisible tug-of-war between suits and cypherpunks