Artificial intelligence is already mainstream in financial technology, powering everything from budgeting apps to customer service chatbots. But can AI provide smart, personalized financial advice like a financial advisor?
The reality is closer than you might think, says Andrew Lo, professor of finance at MIT Sloan and director of the MIT Laboratory for Financial Engineering. Lo recently released an update on research he is conducting into whether generative AI has the potential to provide sound financial advice tailored to each individual.
Many people seeking bespoke financial advice meet with a trusted financial advisor. Could the large language models (LLMs) that underpin AI systems like GPT-4 step in to replace them? Seeking a deeper understanding of the role of LLMs in providing financial advice, Lo is working on a three-part project with graduate students Gillian Ross and Nina Gersberg. He presented the research results to date at the 2024 MIT AI Conference, hosted by the MIT Industrial Liaison Program.
“In my opinion, financial advice is an ideal guinea pig because the stakes are so high,” Lo said. “There's a lot of money in this field, a lot of financial advisors, and a lot of problems arise from getting the wrong financial advice.”
Here's what researchers have learned so far by asking large language models to perform specific tasks.
Large language models can provide financial advice, but they require supplementary modules.
LLMs are already being used to provide financial advice. The key question is whether that advice is appropriate, that is, whether it reflects the domain-specific knowledge that humans demonstrate when passing the CFA exam or earning other certifications, Lo said.
Lo said the team's research to date shows that AI can perform fairly well in this regard, provided that supplementary modules incorporating finance-specific knowledge are added.
“Preliminary analysis suggests that relatively lightweight modules, without a lot of data or analysis, can actually be used to generate domain-specific knowledge that is inherited by large language models,” Lo states.
Without the module, ChatGPT “doesn't quite pass, but it's close,” Lo says. “It's actually surprisingly close.” Still, Lo expects that supplementary modules or a finance-specific LLM will be needed to navigate the sector's complex legal, ethical, and regulatory landscape, and that ongoing research will make this clear.
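The idea of a lightweight domain module layered on top of a general-purpose model can be sketched as follows. This is a hypothetical illustration, not the team's actual system: `base_model` stands in for any generic LLM call, and the knowledge entries are invented examples of finance-specific content.

```python
# Minimal sketch: a lightweight finance module wrapping a generic model.
# `base_model` and the knowledge entries are hypothetical placeholders.

FINANCE_KNOWLEDGE = {
    "diversification": "Spreading investments across assets reduces unsystematic risk.",
    "fiduciary": "A fiduciary must act in the client's best interest, ahead of their own.",
}

def base_model(prompt: str) -> str:
    """Stand-in for a generic LLM call with no domain-specific training."""
    return f"[generic answer to: {prompt}]"

def domain_module(query: str) -> str:
    """Prepend matching finance-specific notes before calling the base model."""
    notes = [text for term, text in FINANCE_KNOWLEDGE.items() if term in query.lower()]
    context = " ".join(notes)
    prompt = f"Context: {context}\nQuestion: {query}" if context else query
    return base_model(prompt)

print(domain_module("Why does diversification matter for my portfolio?"))
```

The point of the design is that the module itself stays small: the domain knowledge lives outside the model and is injected only when relevant, rather than retraining the model from scratch.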
AI has the potential to personalize the tone and content of financial advice.
Many large language models, such as ChatGPT 4.0, write at a level suited to individuals with at least a college education, but Lo and his research team are currently investigating whether that can be brought down to a high school reading level. “Considering the purpose, that would be ideal,” Lo said.
As LLMs develop the ability to “talk” to both older retirees without high school degrees and professional regulators, they will not only be able to answer questions satisfactorily but, like human financial advisors, will also be able to respond in an empathetic tone.
Large language models adopt a neutral or slightly positive tone out of the box, but Lo believes this, too, can be personalized to help foster relationships with clients, which may increase the likelihood that clients will follow the LLM's financial advice.
Lo's research builds on his previous work on why some investors “panic” and exit the stock market after losing large sums of money. “If the client is neutral, you should adopt a neutral tone,” he said. “If the client is a little positive, adopt a positive tone.”
However, if a client exhibits extreme optimism or pessimism, advisors should adopt an opposing tone with the idea of finding a middle ground.
“When we have extreme reactions, that's when we need this kind of counter-reinforcement to mollify the individual,” Lo said. “When investors are excited, you want to bring them back down to Earth and prevent them from taking any extreme investment actions.”
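The tone-matching rule described above can be sketched as a simple policy. The thresholds and tone labels here are hypothetical, chosen only to illustrate the logic: mirror mild sentiment, counter extreme sentiment.

```python
# Illustrative sketch of the tone-matching rule: mirror mild client
# sentiment, counter extreme sentiment. Thresholds are hypothetical;
# sentiment is assumed to be a score in [-1, 1].

def advisor_tone(client_sentiment: float) -> str:
    """Pick an advisor tone from a client sentiment score."""
    if client_sentiment > 0.8:       # extreme optimism: bring them back to Earth
        return "cautionary"
    if client_sentiment < -0.8:      # extreme pessimism or panic: reassure
        return "reassuring"
    if client_sentiment > 0.2:       # mildly positive: mirror it
        return "positive"
    return "neutral"                 # neutral clients get a neutral tone

for s in (0.0, 0.5, 0.95, -0.9):
    print(s, "->", advisor_tone(s))
```

In a real system the sentiment score would come from analyzing the client's messages; the counter-reinforcement branches are what distinguish this from simple tone mirroring.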
Generative AI has the potential to behave ethically, but bias remains a concern.
The final point Lo considered was what he called the “complex question” of whether generative AI can be trusted: Can it uphold a fiduciary duty to engage in ethical financial behavior, as is expected of human advisors?
“That's a whole can of worms,” Lo said. “Some people argue that financial ethics is a contradiction in terms, but I disagree with that. We have to talk about the concept of fiduciary responsibility.”
To understand more specifically what “working in the best interests of investors” actually means for LLMs, Lo and his research team focused on retrieval-augmented generation, known as RAG, which pulls domain-specific data from external knowledge bases. They built a RAG corpus of financial lawsuits filed between one party and another, as a way to teach the technology what violations of ethical behavior look like.
“When we apply this setting to large language models, we find that ChatGPT 4.0 is relatively fair, but other large language models do exhibit biases, including gender bias and many others,” Lo said.
“Training data can come from all corners of the internet, and it contains large amounts of biased and harmful content. When LLMs are trained on that data, they can reproduce those biases, which is clearly an undesired result.”
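One common way to surface this kind of bias is a paired-prompt probe: ask the model the same question with only a demographic cue (here, a name) swapped, and flag pairs whose answers differ. The sketch below is hypothetical; a deliberately biased stub model is used in place of a real LLM so the probe has something to catch.

```python
# Hedged sketch of a gender-bias probe: identical prompts differing only
# in the name, flagged when the answers diverge. `biased_stub` is a toy
# stand-in for a real model, built to illustrate what the probe detects.

TEMPLATE = "Should {name} invest aggressively for retirement?"
PAIRS = [("James", "Mary"), ("Robert", "Patricia")]

def biased_stub(prompt: str) -> str:
    """Toy model that (deliberately) answers differently for female names."""
    return "Be cautious." if any(n in prompt for n in ("Mary", "Patricia")) else "Go for it."

def probe(model) -> list[tuple[str, str]]:
    """Return the name pairs whose answers differ, flagging potential bias."""
    flagged = []
    for a, b in PAIRS:
        if model(TEMPLATE.format(name=a)) != model(TEMPLATE.format(name=b)):
            flagged.append((a, b))
    return flagged

print(probe(biased_stub))
```

An unbiased model would return an empty list here; in practice such probes use many templates and attributes, since a single divergent answer can also reflect ordinary sampling noise.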
The findings could have implications for other industries as well.
Lo said the study of the application and usefulness of LLMs in finance can be extended to other fields, such as medicine, accounting, and law.
“Focusing on domain-specific applications is a very useful way to better understand some of the theoretical challenges to generative AI and general intelligence,” Lo says.