Bio

Shijie Wu is an AI research scientist and engineer at Bloomberg. His research focuses on large language models and multilingual natural language processing. He is one of the lead authors of the BloombergGPT white paper. He was a co-recipient of the Workshop on Representation Learning for NLP (RepL4NLP) “Best Long Paper” award in 2020 and of an EACL “Honorable Mention, Best Short Paper” award in 2021. He co-organized the SIGMORPHON shared tasks in 2019 and 2020. Before joining Bloomberg, he received his Ph.D. from Johns Hopkins University in 2021.

Talk Title: BloombergGPT: A Large Language Model for Finance

Abstract: The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large Language Models (LLMs) have been shown to be effective on a variety of tasks; however, no LLM specialized for the financial domain has been reported in the literature. In this work, we present BloombergGPT, a 50 billion parameter language model that is trained on a wide range of financial data. We construct a 363 billion token dataset based on Bloomberg’s extensive data sources, perhaps the largest domain-specific dataset yet, augmented with 345 billion tokens from general-purpose datasets. We validate BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite of internal benchmarks that most accurately reflect our intended usage. Our mixed dataset training leads to a model that outperforms existing models on financial tasks by significant margins without sacrificing performance on general LLM benchmarks. Additionally, we explain our modeling choices, training process, and evaluation methodology.