Crypto analytics firm Elliptic is integrating artificial intelligence into its toolkit for tracking blockchain transactions and handling risk detection.
Using OpenAI’s ChatGPT chatbot, the company says it will be able to organize data faster and in greater volume. However, it has placed limits on how the chatbot is used and does not employ ChatGPT plug-ins.
“As an organization trusted by the world’s largest banks, regulators, financial institutions, governments, and law enforcers, it’s important to keep our intelligence and data secure,” an Elliptic spokesperson told Decrypt. “That’s why we don’t use ChatGPT to create or modify data, search for intelligence, or monitor transactions.”
We’re very pleased to announce the integration of ChatGPT into our off-chain intelligence and research gathering efforts. Read the full press release here: https://t.co/YV5BYPCjW2 #knowexactly #ChatGPT #blockchain — Elliptic (@elliptic) June 2, 2023
Launched in 2013, Elliptic provides institutions and law enforcement with blockchain analytics for tracking cybercriminals and for regulatory compliance related to cryptocurrency. In May, for example, Elliptic reported that several Chinese businesses selling chemicals used to make fentanyl accepted cryptocurrencies, including Bitcoin. U.S. Senator Elizabeth Warren cited the report in renewing her call for stricter regulation of cryptocurrencies.
Elliptic will use ChatGPT to complement its human-led data collection and organization processes, which it says will let it double down on accuracy and scalability while the large language model (LLM) organizes the data.
“Our employees leverage ChatGPT to enhance our datasets and insights,” the spokesperson said. “We follow and adhere to an AI usage policy and have a robust model validation framework.”
Because Elliptic is not using ChatGPT to generate information, the company said that it is not concerned about AI “hallucinations” or false information. AI hallucinations refer to instances in which an AI generates unexpected or untrue results not backed by real-world data.
AI chatbots like ChatGPT have come under increased scrutiny for convincingly presenting falsified information about people, places, and events. OpenAI has stepped up efforts to address these so-called hallucinations, using mathematical problem-solving to train its models and calling the work a critical step toward building aligned artificial general intelligence (AGI).
“Our customers come to us to know exactly their risk exposure,” Elliptic CTO Jackson Hull said in a statement. “Integrating ChatGPT allows us to scale up our intelligence, giving our customers a view on risk they can’t get anywhere else.”