
Welcoming our AI Overlords: Observations from a bond market practitioner testing OpenAI's chatGPT

Technological progress comes in cycles, and the underlying advances are usually amplified in popular opinion into even more extreme cycles of hype and deflation.

Crypto is a case in point (an explanation of where it currently sits along the cycle is probably unnecessary), but one area that is again entering a new phase is Machine Learning and AI.

OpenAI just released its latest model, chatGPT, which refers to itself as an “AI” and, more precisely, as “an autoregressive language model that uses deep learning to produce human-like text”.

First off, the quality of the text generated by chatGPT is astonishing. At first glance, it can seem indistinguishable from human output. It’s not perfect – and we’ll come on to that – but it’s certainly enough to set global newsrooms and social media ablaze over the last week.

So, as a DCM Fintech, we thought we would give it a spin, feed it some bond market-related problems and report our findings.

There is one big limitation to this experiment, which is that we didn’t feed any proprietary data into chatGPT to improve the results. This is all based on out-of-the-box capabilities.

The Good

For simple, factual questions that you’d currently direct to a search engine, chatGPT tends to provide relevant, eloquent and mostly accurate answers. Unlike a search engine, it synthesizes information from numerous sources and puts it into its own, natural-sounding words. It can also respond to feedback and further clarifications and is generally quite fun to work with.

For example, you can ask it about the GDP of the USA, then ask to further split out the answer by industry and then follow up with a question about which of those are growing the fastest.

Or you can ask it to explain the Taylor Rule, receive a comprehensive answer and ask follow up questions, such as “can you provide examples of when it would be appropriate to ignore the rule?”. Or ask about the implications of including a make-whole clause in a bond transaction.
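For readers unfamiliar with it, the Taylor Rule that chatGPT was asked to explain is a simple formula linking the policy rate to inflation and output deviations. A minimal sketch of the classic Taylor (1993) specification, with illustrative numbers of our own choosing:

```python
def taylor_rule(neutral_real_rate, inflation, target_inflation, output_gap):
    """Classic Taylor (1993) rule: prescribed policy rate responds to
    deviations of inflation from target and output from potential,
    each with a weight of 0.5. All inputs in percentage points."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Illustrative inputs (not actual data): inflation at 4% vs a 2% target,
# output 1% above potential, neutral real rate of 2%
rate = taylor_rule(neutral_real_rate=2.0, inflation=4.0,
                   target_inflation=2.0, output_gap=1.0)
print(f"Prescribed policy rate: {rate:.2f}%")  # 7.50%
```

The follow-up question in the article (“when would it be appropriate to ignore the rule?”) is exactly where a mechanical formula like this falls short, which is what makes it a good test of the model.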

It’s impressive stuff and commentators are already questioning what this might mean for Google’s future.

The Great

What really blew my mind though is chatGPT’s ability to make sense of fairly complex relationships to provide sensible and eloquent answers even for questions that you can’t immediately find answers to on Google.

For example, I asked “If the US central bank strikes a hawkish tone in its press conference, how would you expect cable to respond?” and received a coherent, plausible and fairly comprehensive response (full response in Appendix, Example 1).

The model clearly understood the use of jargon within context (“hawkish” and “cable”) and, fundamentally, gave a solid answer.

Another example I tried was “If I'm a hypothetical risk-averse investor comparing two bonds with an identical yield, but one of the bonds has a much higher duration than the other, which bond should I buy?”

Again, chatGPT had no problem with either the question or the answer and duly pointed me towards the lower duration bond given its lower interest rate sensitivity (Example 2).
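The intuition behind that answer can be shown with the standard first-order duration approximation. A minimal sketch with hypothetical durations (the bonds in the question were not given specific numbers):

```python
def price_change_pct(modified_duration, yield_change):
    """First-order approximation of a bond's percentage price change:
    dP/P ~= -D_mod * dy, with yield_change in decimal (0.01 = 100bp)."""
    return -modified_duration * yield_change

# Hypothetical example: yields rise 100bp. The high-duration bond
# loses roughly five times as much as the low-duration one.
high_duration = price_change_pct(modified_duration=10.0, yield_change=0.01)
low_duration = price_change_pct(modified_duration=2.0, yield_change=0.01)
print(f"High duration: {high_duration:+.1%}")  # -10.0%
print(f"Low duration:  {low_duration:+.1%}")   # -2.0%
```

Same yield, very different interest rate risk, which is why the risk-averse investor in the question is pointed to the lower-duration bond.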

Taking it one step further, I tried a question that has managed to stump several new Analysts back in my DCM days: “How would you explain that corporate bond issuers sometimes issue bonds in other currencies, different to their home currency?”

The response ticked the key boxes of diversification, financing opportunities, broader pool of investors, potential for lower costs and hedging of cash flows (Example 3). Wow!

The Not-so-Good

The biggest shortfall at the moment is that chatGPT has a tendency to provide “plausible-sounding but incorrect or nonsensical answers” (as acknowledged by OpenAI themselves).

For example, when I asked it to explain the difference between a G-spread and an i-spread in the bond market (Example 4), I received a sensible-sounding answer that was mostly correct, but conflated the G-spread and OAS spread in the explanation.
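For context, the distinction chatGPT muddled is a simple one: both spreads measure a bond's yield pickup over a benchmark interpolated at the bond's maturity, but the G-spread uses the government curve while the i-spread uses the swap curve. A minimal sketch with made-up curve levels:

```python
def interpolate(curve, maturity):
    """Linearly interpolate a benchmark curve, given as a sorted list of
    (maturity_years, rate_pct) pairs, at the bond's maturity."""
    for (t0, r0), (t1, r1) in zip(curve, curve[1:]):
        if t0 <= maturity <= t1:
            return r0 + (maturity - t0) / (t1 - t0) * (r1 - r0)
    raise ValueError("maturity outside curve range")

def g_spread(bond_yield, govt_curve, maturity):
    # G-spread: bond yield minus the interpolated government bond yield
    return bond_yield - interpolate(govt_curve, maturity)

def i_spread(bond_yield, swap_curve, maturity):
    # I-spread: bond yield minus the interpolated swap rate
    return bond_yield - interpolate(swap_curve, maturity)

# Hypothetical 7y corporate bond yielding 5.00%, against made-up curves
govt = [(5, 3.00), (10, 3.50)]   # government curve: 3.20% at 7y
swaps = [(5, 3.40), (10, 3.80)]  # swap curve: 3.56% at 7y
print(f"G-spread: {g_spread(5.00, govt, 7):.2f}%")   # 1.80%
print(f"i-spread: {i_spread(5.00, swaps, 7):.2f}%")  # 1.44%
```

Neither of these involves optionality, which is what distinguishes them from the OAS that chatGPT conflated into its answer.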

Looking beyond that – and this is being fairly critical – the grammar and structure of the answers do become repetitive (or shall we say, recognisable) after a while. It’s more like how a child might write an essay, adhering to strict instructions on which key aspects to cover, rather than a real explanation from a human.

I won’t go into a debate about the existential risks of AI here, but at the very least it seems inevitable that even the current, freely accessible chatGPT will lead to a huge increase in digital “noise”. It can easily produce phishing emails that are significantly more convincing than what you find in your spam folder on a daily basis; it can produce credible social media posts; blog articles; (fake) news articles; product descriptions; etc.

There’s already way too much low-quality content out there and this is only going to increase.

Where do we go from here?

It’s really impressive to see how quickly chatGPT has progressed to an almost human-like level of interaction.

There is a lot more than chat under the covers, for example the ability to write code, to identify bugs in existing code, or to work with proprietary datasets.

Although machine learning and AI isn’t part of our core value proposition at Bots, there are some enticing use cases in DCM and fixed income markets in general. We deal with large and well-structured datasets and machine learning can certainly be useful to pinpoint trends, answer market-related or factual questions, summarise issuance or transaction stats, answer questions in a support chat – all the way to writing market updates and predicting pricing.

Could chatGPT generate a plausible market update blurb to go with a weekly client update? Or synthesise the latest developments in a particular market? With the right input data, it can already do that today.

Heck, it can even do it in rap format if you prefer (Example 6): "Write a rap, Eminem-style, about the demise of human civilisation as a result of the AI singularity"

But, it remains constrained by the pattern-seeking and pattern-reinforcing behaviour that is evident when you work with chatGPT for more than a few minutes. And who would want to read a market update that ticks all the obvious points (with occasional factual mistakes) but misses the deeper insight that is meant to provide value for the reader?

In my view, there will always be a need to have a “human in the loop” in these kinds of human-to-human interactions. But it’s also clear that chatGPT, or machine learning and AI more generally, will transform the repetitive aspects of many jobs.

This will go significantly beyond what current workflow software is already able to achieve. The capital markets industry still has a lot of catching up to do on just the basics. As we’ve been preaching all along: those with access to the best data will come out as the winners.

Oh and by the way, the image that goes with this article was created by DALL-E 2, an AI system that can produce realistic images and art from a description in natural language.

The instructions were: “a pencil and watercolour painting of two bankers walking towards the sunset in a cityscape setting like Canary Wharf”


Judge for yourself. Excerpts of conversations with OpenAI’s chatGPT

Example 1

Example 2

Example 3

Example 4

Example 5

Example 6
