Oliver Corstjens

The case for more DCM data


Digitise, digitalise, datafy…


There is a difference between the terms digitising and digitalising. Digitising means converting information into digital form so a computer can process it. Digitalising goes a step further: using IT to change the processes themselves. By that standard, the DCM workflow is almost completely digitised and partially digitalised (via Excel, for example). At Bots, our purpose is to push for more digitalisation of the DCM workflow, but we have another aim too, and the term digitalising doesn’t do it justice.


The more accurate term for this other aim is to “datafy” the DCM workflow. Currently, new issue pricing follows digital processes but no dataset is created, at least not one you can easily manipulate (see our blog post: Why most DCM teams are data rich but information poor). With Bots, your new issue pricing is stored in a dedicated database that you can query easily.
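To make the idea concrete, here is a minimal sketch of what “pricing in a queryable database” looks like. The schema, table name, issuer names and levels below are all invented for illustration; they are not Bots’ actual data model.

```python
import sqlite3

# Hypothetical schema for illustration only -- not Bots' actual database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE new_issue_pricing (
        priced_on   TEXT,     -- ISO date the indication was sent
        issuer      TEXT,
        tenor_years INTEGER,
        spread_bp   REAL      -- indicative spread over mid-swaps, in bp
    )
""")
conn.executemany(
    "INSERT INTO new_issue_pricing VALUES (?, ?, ?, ?)",
    [
        ("2022-05-02", "Acme Utilities", 10, 95.0),
        ("2022-05-09", "Acme Utilities", 10, 100.0),
        ("2022-05-09", "Beta Telecom",   10, 110.0),
    ],
)

# Once pricing lives in a table, questions like "how much has each
# issuer's 10y level moved?" become a one-line query.
rows = conn.execute("""
    SELECT issuer,
           MAX(spread_bp) - MIN(spread_bp) AS move_bp
    FROM new_issue_pricing
    WHERE tenor_years = 10
    GROUP BY issuer
    ORDER BY issuer
""").fetchall()
print(rows)
```

The point is not the specific query but that, unlike a pricing run buried in sent emails, a stored history of indications can be sliced by issuer, tenor or date on demand.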


That’s not to say proper DCM datasets don’t exist; of course they do. Multiple data providers, for example, offer comprehensive datasets of new issue deal stats. But there are still many gaps, and none bigger than the data created every day by DCM and syndicate bankers.


The case for more DCM data…


I’m not going to upset anyone by saying that data availability is a key (and growing) pillar of any DCM originator’s pitching strategy. What’s more, given the tone of markets at present, DCM bankers face tough decisions about whether to recommend launching a deal, and at what price, and they are against the clock when making them. Having the best data to support your decision is incredibly valuable when trying to avoid bad calls.


“But we have the data we need already”, I hear you say.


I take the point, but would argue that more is more when it comes to data. New datasets let you make your pitch fresher and more compelling. Imagine, instead of telling an issuer that their pricing has widened by 5bp, showing them how that move compares with all their peers, and how the cross-currency relative value now stacks up against last week, last month and last year.


Also, once enough DCM professionals have “datafied” enough of their workflow, you can legitimately claim to be working with Big Data, meaning you no longer have to settle for samples and assumptions. At that stage, you can start asking questions such as “what is my mandate rate for clients I send regular pricing to versus ones I don’t, and what was it last year?”.
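That mandate-rate question is itself just a small computation once the underlying records exist. A minimal sketch, with entirely invented client names and outcomes:

```python
# Hypothetical illustration: do clients who receive regular pricing runs
# mandate us more often? All names and outcomes below are invented.
clients = {
    "Acme Utilities": {"gets_regular_pricing": True,  "mandated_us": True},
    "Beta Telecom":   {"gets_regular_pricing": True,  "mandated_us": True},
    "Gamma Energy":   {"gets_regular_pricing": False, "mandated_us": False},
    "Delta Rail":     {"gets_regular_pricing": False, "mandated_us": True},
}

def mandate_rate(regular: bool) -> float:
    """Share of clients in the given group that mandated us."""
    group = [c for c in clients.values()
             if c["gets_regular_pricing"] is regular]
    return sum(c["mandated_us"] for c in group) / len(group)

print(f"Regular pricing sent:    {mandate_rate(True):.0%}")
print(f"No regular pricing sent: {mandate_rate(False):.0%}")
```

Run over a full, datafied client universe rather than four made-up rows, the same few lines answer the question for this year, last year, or any slice of the franchise.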


When you have the entire universe of observations, you gain a potentially invaluable source of insights. Google, for example, found that its users’ searches for flu symptoms could track the spread of flu across the US faster than traditional surveillance reports (the Google Flu Trends project). Ask yourself: what insights could be unlocked by an entirely new Big Data dataset of activity in your DCM team?


Let’s get datafying DCM!
