With the proliferation of open-source large language models, "unrestricted AI tools" such as WormGPT and FraudGPT are being misused to generate phishing emails, write malicious smart-contract code, and manipulate user communities, posing a serious threat to the security of the crypto industry. This article examines how these models work, how they are abused, and how to defend against them, sounding the alarm for Web3 practitioners.