The new tool lets users retrieve and understand the source code of a given contract address with the help of an artificial intelligence prompt.
Etherscan, the Ethereum block explorer and analytics platform, introduced a new tool called "Code Reader" on June 19. The tool uses artificial intelligence to retrieve and interpret the source code of a specific contract address. Given a user prompt, Code Reader generates a response with OpenAI's large language model, offering insight into the contract's source code files. According to the tool's tutorial page:
“To use the tool, you need a valid OpenAI API Key and sufficient OpenAI usage limits. This tool does not store your API keys.”
Code Reader's potential uses include gaining a deeper understanding of a contract's code through AI-generated explanations, obtaining comprehensive lists of smart contract functions related to Ethereum data, and understanding how the underlying contract interacts with decentralized applications. Per the tutorial page, once the contract files are retrieved, users can choose a specific source code file to read. The source code can also be modified directly within the UI before being shared with the AI.
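For readers curious about the mechanics, here is a minimal sketch of the kind of workflow Code Reader automates: fetching a verified contract's source via Etherscan's public "getsourcecode" API and asking an OpenAI model to explain it. The prompt, model choice and placeholder keys are illustrative assumptions, not Etherscan's actual implementation.

```python
# Illustrative sketch only -- not Etherscan's Code Reader implementation.
import requests
from openai import OpenAI

ETHERSCAN_KEY = "YOUR_ETHERSCAN_API_KEY"  # placeholder
contract = "0x..."  # address of a verified contract (placeholder)

# 1. Fetch the verified source code for the contract address.
resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "contract",
        "action": "getsourcecode",
        "address": contract,
        "apikey": ETHERSCAN_KEY,
    },
    timeout=30,
)
source = resp.json()["result"][0]["SourceCode"]

# 2. Ask an OpenAI model to explain the code, much as Code Reader does
#    with the user's own OpenAI API key.
client = OpenAI(api_key="YOUR_OPENAI_API_KEY")  # user-supplied key
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the article does not specify one
    messages=[
        {"role": "user",
         "content": f"Explain what this smart contract does:\n\n{source}"},
    ],
)
print(answer.choices[0].message.content)
```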
Amid the AI boom, some experts have raised concerns about the feasibility of current AI models. A recent report by Singaporean venture capital firm Foresight Ventures states that "computing power resources will be the next big battlefield for the coming decade." Yet despite growing demand for training large AI models on decentralized distributed computing power networks, researchers point to significant limitations that current prototypes face, such as complex data synchronization, network optimization, and data privacy and security concerns.
For example, the Foresight researchers note that training a large model with 175 billion parameters in single-precision floating-point representation would require roughly 700 gigabytes. Distributed training, however, requires these parameters to be frequently transmitted and updated between computing nodes. With 100 computing nodes, each needing to update all parameters at every unit step, the model would require transmitting 70 terabytes of data per second, far beyond the capacity of most networks. The researchers concluded:
“In most scenarios, small AI models are still a more feasible choice, and should not be overlooked too early in the tide of FOMO [fear of missing out] on large models.”
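The report's figures follow from simple arithmetic: 175 billion parameters at 4 bytes each (single precision) is about 700 GB, and synchronizing that across 100 nodes each step moves about 70 TB. A quick back-of-the-envelope check (the node count and byte width come from the report; the script itself is illustrative):

```python
# Back-of-the-envelope check of the Foresight Ventures figures quoted above.
params = 175e9        # model parameters
bytes_per_param = 4   # single-precision float (FP32)
nodes = 100           # computing nodes in the report's scenario

model_size_gb = params * bytes_per_param / 1e9
per_step_tb = model_size_gb * nodes / 1e3

print(f"Model size: {model_size_gb:.0f} GB")         # ~700 GB
print(f"Data moved per step: {per_step_tb:.0f} TB")  # ~70 TB
```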