Machine Learning Is Cheaper But Worse Than Humans at Fund Analysis – Institutional Investor

Posted: October 7, 2020 at 7:50 am

Morningstar had a problem.

Or rather, its millions of users did: The star-rating system, which drives huge volumes of assets, is inherently backward-looking. These make-or-break badges rate how well (or poorly) a fund has performed, not how it will perform.

Morningstar's solution was analysts: humans who dig deep into the big, popular fund products and then assign them forward-looking ratings. For the smaller or niche products, Morningstar unleashed the algorithms.

But the humans still have an edge, academic researchers found, except in productivity.

"We find that the analyst report, which is usually four or five pages, provides very detailed information, and is better than a star rating, as it claims to be," said Si Cheng, an assistant finance professor at the Chinese University of Hong Kong, in an interview. She and her co-authors of a just-published study also found that the forward-looking algorithmic analysis doesn't do as much as an analyst rating. "If we look at very similar funds rated by human and machine, they're quite different, even though you have two forward-looking ratings."

[II Deep Dive: AQR's Problem With Machine Learning: Cats Morph Into Dogs]

The most potent value in all of these Morningstar modes came from the tone of human-generated reports, assessed using machine-driven textual analysis.

Tone is likely to come from soft information, such as what the analyst picks up from speaking to fund management and investors. That deeply human sense of enthusiasm or pessimism matters most when it conflicts with the actual rating, which both the analysts and the algorithms base on quantitative factors.
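The article doesn't spell out the study's text-analysis pipeline, but dictionary-based tone scoring is the standard approach in this literature (often built on the Loughran-McDonald finance word lists). The sketch below is a minimal illustration under that assumption; the word sets, the tone_score function, and the Gold-rating check are illustrative placeholders, not the authors' code.

```python
# A minimal sketch of dictionary-based tone scoring for an analyst report.
# ASSUMPTION: these tiny word lists stand in for a real finance lexicon
# such as Loughran-McDonald; they are not the study's actual dictionary.

import re

POSITIVE = {"strong", "outperform", "confident", "disciplined", "improving"}
NEGATIVE = {"concern", "underperform", "deteriorating", "risky", "weak"}

def tone_score(report_text: str) -> float:
    """Return net tone in [-1, 1]: (positive - negative) / total tone words."""
    words = re.findall(r"[a-z']+", report_text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def conflicting_signal(rating: str, report_text: str) -> bool:
    """Flag the human-quant conflict described above: a top rating
    paired with a pessimistic tone in the written report."""
    return rating == "Gold" and tone_score(report_text) < 0
```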

Most of Morningstar's users are retail investors, but only professionals are tapping into this human-quant arbitrage, discovered Cheng and her Peking University co-authors Ruichang Lu and Xiajun Zhang.

"We do find that only institutional investors are taking advantage of analysts' reports," she told Institutional Investor Tuesday. "They do withdraw from a fund if the fund gets a Gold rating but a pessimistic tone."

Cheng, her co-authors, and other academic researchers working in the same vein highlight cost as one major advantage of algorithmic analysis over the old-fashioned kind. "After initial setup, they automatically generate all of the analysis at a frequency that a human cannot replicate," Cheng said.

As Anne Tucker, director of the legal analytics and innovation initiative at Georgia State University, cogently put it, machine learning is "leveraging components of human judgment at scale. It's not a replacement; it's a tool for increasing the scale and the speed. On the legal side, almost all of our data is locked in text: memos, regulatory filings, orders, court decisions, and the like."

Tucker has teamed up with GSU analytics professor Yusen Xia and associate law professor Susan Navarro Smelcer to gather the text of fund filings and run machine-learning programs over them, searching for patterns and indicators of future risk and performance. The project is underway and is detailed in a recent working paper.

"We have compiled all of the investment strategy and risk sections from 2010 onwards, and are using text mining, machine learning, and a suite of other computational tools to understand the content, study compliance, and then to aggregate texts in order to model emerging risks," Tucker told II. "If we listen to the most sophisticated investors collectively, what can we learn? If we would have had these tools before 2008, would we have been able to pick up tremors?"
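The working paper's pipeline isn't detailed here, but one common way to aggregate texts and surface emerging risk language is to compare term weights across filing periods. Below is a hedged sketch using scikit-learn's TfidfVectorizer; the emerging_terms function and its inputs are hypothetical illustrations, not the project's method.

```python
# Sketch: rank terms whose average TF-IDF weight rose most between two
# filing periods. ASSUMPTION: prior_sections and current_sections are
# lists of risk-section strings, e.g. from 2019 and 2020 filings.

from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def emerging_terms(prior_sections, current_sections, top_n=10):
    vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), min_df=2)
    vec.fit(prior_sections + current_sections)  # shared vocabulary for both periods
    prior = np.asarray(vec.transform(prior_sections).mean(axis=0)).ravel()
    curr = np.asarray(vec.transform(current_sections).mean(axis=0)).ravel()
    delta = curr - prior                         # weight gain per term
    terms = vec.get_feature_names_out()
    top = np.argsort(delta)[::-1][:top_n]
    return [(terms[i], float(delta[i])) for i in top]
```

Run over successive years of risk sections, a comparison like this would be expected to show vocabulary such as "pandemic" jumping sharply in weight in 2020 filings.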

Maybe, but they wouldn't have picked up the Covid-19 crisis, early findings suggest.

"There were essentially no pandemic-related risk disclosures before this happened," Tucker said.
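As a toy illustration of that kind of check, and not the study's procedure, one could scan pre-2020 risk sections for pandemic vocabulary; the term list here is an assumption.

```python
# Illustrative scan for pandemic language in a risk-disclosure section.
# ASSUMPTION: this term list is a stand-in, not the researchers' keywords.
PANDEMIC_TERMS = ("pandemic", "epidemic", "coronavirus", "infectious disease")

def mentions_pandemic(risk_section: str) -> bool:
    text = risk_section.lower()
    return any(term in text for term in PANDEMIC_TERMS)
```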

See the rest here:
Machine Learning Is Cheaper But Worse Than Humans at Fund Analysis - Institutional Investor
