Transcript: Emily Wright podcast

Emily has an extensive track record of managing, building, and transforming employee compliance surveillance programs in financial institutions across a wide range of business areas, from investment banking, financial markets, and broking to private banking and wealth management.

The following is a transcript of the podcast episode Emily Wright on trade surveillance, ethics and conduct between GRIP senior reporter Carmen Cracknell and author Emily Wright.

[INTRO]

Carmen Cracknell: Thank you, Emily, for joining us on the GRIP Podcast.

Emily Wright: Thank you for having me. It’s a pleasure to be here.

Carmen Cracknell: Could you start by telling us a bit about your professional background?

Emily Wright: Of course, yeah. So I headed straight from university into the City, literally into the Square Mile, sort of bypassed the milk round. But anyway, I ended up headhunting for the first five or six years of my career because Arthur Andersen had been my milk round target. And of course, to age myself, they went bust while I was finishing my degree.

So the whole Enron scandal started. So I went into search, into executive search, not because it had necessarily attracted me, but because I didn’t have a plan B. And what was really interesting was, although it wasn’t a career I wanted to stick with, it taught me an awful lot about the City, which I hadn’t been overly interested in. I’d read philosophy at university.

I had talked to all the financial institutions and all the consulting firms, as you do, but didn’t really have a passion for finance. And I think what I found was the privilege of executive search: you sit with incredibly senior people who are very capable and passionate about their industry.

And they talked to you about some really fantastic things, the restructurings they’re doing, the talent they need, the direction that they’re taking their products. And so you have this sideline to what’s exciting in the City. And so I got an interest in the City. I never developed an interest in recruitment or search. And so after five years in that space, I actually went to work for a client.

So I joined Lehman Brothers, into their Diversity and Inclusion team. It was an interesting overlap from where I’d come from. They were looking to build up particularly their MD female representation. So an executive search background made sense to segue into diversity and inclusion.

And having found myself for the first time in a publicly owned company with shareholders, not a private practice, I just loved it. I didn’t think I was ever going to stop working for publicly owned institutions.

And working for a much bigger organization, it was great. And HR was fantastic. It really suited me for a long time. There was a lot of strategic discussions around the diversity and inclusion program at Lehman Brothers at the time, because the US had a very well established program.

And we were building that out across EMEA and partnering with APAC to really make sure that those regions had the relevant aspects of the program that were needed.

They obviously don’t exactly reflect the US. So that was wonderful. I could have imagined staying at Lehman Brothers a very long time. I enjoyed it a great deal, but that was not a choice I got to make. In September 2008, they filed for Chapter 11, and that was the end of that employment.

And from there, I went with Lehman people over to do the Fimat-Calyon merger, which became Newedge. And so that was about a year of putting together two legacy organizations with very different cultures, slightly different client lists and businesses, and building an integrated brokerage firm that was independent. And that was great. That was four or five years working across a huge range of projects, which is what comes up when you work on a merger.

So, reporting into the head of HR and communications, but really in a French organization, that meant we had a lot of contact with the regulator. And that was the point at which I started talking directly to regulators, running a relationship with the regulator, but also working closely with compliance. So in the context of Newedge, we were the ones who had the Smarsh tool and did a lot of the eComms surveillance.

We had the regulatory contact. We ran the training platform, so all of the mandatory compliance training that was rolled out. I ended up almost accidentally involved in compliance from that point onwards.

And then my very next role from there, I moved to ICAP specifically to move from London to Asia for lots of personal reasons. It was just a little bit closer to the far side of the world where all of my family had decided to relocate back to.

And so I took a role based in Hong Kong doing core compliance for the region and rolling out monitoring and surveillance for the region and taking on operational risks. So it was a big step change for me in some respects.

Again, I was working for a former Lehman person. And I think this is where the Lehman network came in; the culture at Lehman Brothers was that you took people who were capable and put them into different roles. And so that was an opportunity to really get to grips with the second line, because I was across all of it.

I had some risk, I had compliance, and I picked up monitoring and surveillance. And the interesting thing about ICAP at the time, although culturally it was significantly different from the kind of brokerage firm that Newedge was, is that it was the only non-bank that had been captured by LIBOR. And we were still undertaking LIBOR investigations, and indeed the APAC region had been implicated at ICAP. And so we had genuine live cases running.

So that was really where I cut my teeth on LIBOR and picked up a surveillance program, and rolled out the first RCSAs and a number of other things that would perhaps have been considered basic in a London context, but weren’t necessarily basic back then in those jurisdictions. So, 16 jurisdictions across Asia.

From there, I went to JP Morgan and ran the regional monitoring and surveillance program along with a few other programs, but that was really 90% of my remit. And that included wealth management, private banking, all of the markets, all the information barriers and across all the jurisdictions for Asia. And then following that, I took on the global role at Standard Chartered Bank.

And although I took that in Hong Kong, it did come with a move shortly after joining to Singapore. So that then put me into Singapore to run the global program from there. And again, that was some 63 jurisdictions, including wealth and private banking as well as markets and investment banking, and, for all of our sins that we do in surveillance, whether it was trade or eComms or voice comms, the whole gamut.

Following that, we did the COVID years in Singapore. I know there was nowhere good to do COVID, but it was definitely a proactive decision to leave the location after COVID. It had been a really tough lockdown period there and very isolating, obviously, for expats who had newly arrived in Singapore as we went into lockdown and were then not able to leave the island for a significant period. So with a number of personal changes, I really needed to move back to Australia at that point.

So I decided to become self-employed, and I’ve been working as a consultant for a number of clients. I’ve also picked up my executive coaching again, which is something that I started doing at Newedge a very long time ago and have probably only employed in a very relaxed way through all of my roles. Now I’m doing it as much more of a focus, which is great fun.

Carmen Cracknell: Wow. So yeah, that is a huge amount of experience and such a big switch from HR and executive search. But when you explain it, it kind of makes sense how you got into more of a regulation area.

Your book that you’ve just released, Behind the Screens: Understanding Employee Surveillance in Financial Services, is quite a seemingly niche, compliance-focused work. So what prompted you to write this book? Why now, and why this particular niche that you focused on?

Emily Wright: Yeah, sure. So why the niche? It’s really driven from 12 years of working in employee compliance surveillance, and it is quite niche, but I’ve done that at three different organizations and I’ve now worked with a number of other clients externally. So I’ve seen it as an internal practitioner from the vantage point of three very different organizations with very different footprints, and the challenges that brings up, and I’ve talked with a huge number of different regulators during that time as well. But then also, having stepped out and now working with clients who need support around those programs and enhancements, I’ve seen it from a different vantage point again.

And so I think generally across the industry, this is a function that probably could be done better. I don’t want to say anything negative about it. There’s a lot of great practitioners doing really great work, but there are some really entrenched challenges to doing good surveillance in financial services. And I think there’s a real potential for this function to deliver far more than it’s delivering. So part of the motive for the book was just to share that.

I think why now, really it’s circumstance, leaving the employment of an institution and becoming self-employed does provide a certain level of freedom as well as time, not only to write a book, but to be able to share your own opinions as an individual rather than representing a brand or a particular institution.

So it was really a case of not only had I done this for 12 years, but I’ve also got that kind of freedom and flexibility at the moment.

Carmen Cracknell: That makes sense. So in the book, you talk about three particular forms of surveillance in financial services. For anyone who kind of doesn’t know the ins and outs of that, can you talk about the, just give an overview maybe of the distinction between the three and the challenges of each?

Emily Wright: Sure. Yeah. So the three really around trade surveillance, written communication surveillance, and voice or audio surveillance.

What trade surveillance does is look at highly structured data from the transactions that banks undertake. This is really what we’re seeing mostly in wholesale banking: literally looking at market trades that are taking place. And surveillance over those is usually done with algorithms. We can go into some more detail around it if you’d like, but there are a couple of different strategies. The challenges, and you shouldn’t underestimate them, we can come to cases later, but banks genuinely do get fined for not doing this well.

So it’s not an easy thing to do. A lot of very big institutions still don’t do it to the standard a regulator would expect. The challenges stem partly from quality of data, partly from the technology available. And the vast majority of technology that is used is external to these institutions. So there’s vendors that provide surveillance technology. So the banks aren’t always in control necessarily of how good a surveillance tool is. They go to market and find the best that’s available. The completeness of coverage is definitely an issue. And how does that come up?

Well, if we take the topic of data, there are some areas where the data is quite good. Exchange traded products are very mature and have got very good data sets, but you’ve got places like fixed income products where it may not be the case. But you’ve also got a big topic of conversation at the moment around related instrument risk and how you detect for that. And what that really requires is not only great data sets on exchange traded and OTC products, but also how you put those together to see where you’ve got risk on one relating to risk on another.

And so it’s that kind of complexity that creates challenge around trade surveillance. In written communications, and this makes a lot of sense, I’m sure for Global Relay. You guys live and breathe this. We’ve seen all sorts of different types of fines leveled here. And perhaps the WhatsApp fines have less to do with actual surveillance than they do books and records. But I think they’ve given everyone a sense of the fact that surveillance is performed in businesses over the messages that people send as part of doing their business. And that’s the surveillance piece that would be eComms.

The challenges there again, we can talk to both data and technology as umbrellas, but they manifest differently. So in eComms you’ve just had this huge proliferation of channels that all come in a variety of formats and therefore different types of data that have got to be ingested and surveilled.

You’ve got a multitude of languages, but it’s not just languages as we think of language. It’s not just like you’ve got English and French and Chinese. You’ve also got all of these other ways that people communicate in writing. So you can blend languages, there’s code words, the joy of the emojis, which we haven’t overcome yet. All of those things sit in written communications and you have to work out a way to understand them and surveil them.

There’s also just a volume question. And it’s not that in trade there isn’t a lot of volume, but if you think about how much communication can take place on all of these different channels between everyone in the market, it’s a phenomenal volume. So you tend to have very, very high alert volumes here. There are sort of two main strategies, and one has not eliminated the other.

There’s the original strategy in eComms, which is around lexicons, and then you’ve got AI stepping in in the form of NLP models. They are still pretty much running alongside each other in almost all institutions. And the reason is that lexicons still generate true positive hits that AI models are known to miss. However, deploying lexicons alone has a lot of challenges around multiple languages, which again we could come back to, and it also produces high volumes of alerts. So you’ve got this proliferation of false positives. And so it’s an area that’s not nearly as well developed as trade surveillance. It’s definitely come in a bit later and the technology is catching up. But you’ve still got these two strategies being run in parallel in most institutions.
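As an illustration of the two parallel strategies described here, a minimal sketch in Python; the lexicon terms, the toy model, and the threshold are all hypothetical, and a real NLP model would be a trained classifier rather than a word-count heuristic:

```python
# Illustrative sketch: lexicon hits and model scores run in parallel,
# and a message is escalated if either strategy fires.
import re

LEXICON = ["off the books", "delete this", "guaranteed return"]  # hypothetical terms

def lexicon_hits(message: str) -> list[str]:
    """Return every lexicon term found in the message (case-insensitive)."""
    lowered = message.lower()
    return [term for term in LEXICON if term in lowered]

def model_score(message: str) -> float:
    """Stand-in for an NLP risk model; real systems use trained classifiers."""
    # Toy heuristic: more 'risk words' per token -> higher score.
    risk_words = {"delete", "guaranteed", "secret"}
    tokens = re.findall(r"[a-z']+", message.lower())
    return sum(t in risk_words for t in tokens) / max(len(tokens), 1)

def surveil(message: str, threshold: float = 0.15) -> bool:
    """Alert if the lexicon OR the model flags the message."""
    return bool(lexicon_hits(message)) or model_score(message) >= threshold

print(surveil("Please delete this thread"))  # True (lexicon hit)
print(surveil("Lunch at noon?"))             # False
```

The OR between the two detectors is what produces the false-positive proliferation Emily mentions: every lexicon hit alerts, even when the model would have scored the message as benign.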

And then finally, you come to voice comms, which is surveillance over audio: either working with the audio files directly, or converting them to text to detect risk in those conversations. And you’ve got some similar challenges to eComms here, such as the proliferation of channels, but certain channels here also don’t have good quality data. An example is when you record a mobile telephone: you’ve often got a lot of background noise or poor quality reception. When you’ve got boomboxes and speaker boxes on desks, you’ve got tons of people shouting in incredibly noisy environments at times, and you’re doing detection in that kind of context, where you’ve also got people potentially whispering, shouting, perhaps using nonstandard language.

If we think that it’s difficult to work out what an emoji means, there’s also an awful lot of jargon on a trading floor that can be just as difficult to understand. So those data sets are messy. This is probably the least developed technology as well; it’s the newest area of surveillance to be deployed, and so the technology is really still developing. And although some groups are still using phonetic strategies, there’s a lot of anticipation across the industry at the moment around voice-to-text and where we’re going to be 12 to 18 months from now on voice-to-text technology. And what that simply means is taking an audio file, converting it into text, and then treating it the way you would treat any written communications file for surveillance purposes.
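A rough sketch of that voice-to-text flow: transcribe the audio, normalize trading-floor jargon, then hand the text to whatever checks already run on written comms. The transcriber below is a stub, and the jargon map is a toy illustration, not a real surveillance lexicon:

```python
# Real floor slang ("yard" = billion, "cable" = GBP/USD), toy mapping.
JARGON = {"yard": "billion", "cable": "GBP/USD"}

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text engine; real programs call a trained model."""
    return audio.decode("utf-8", errors="ignore")

def normalize(transcript: str) -> str:
    """Expand jargon so downstream eComms-style checks see plain language."""
    return " ".join(JARGON.get(word.lower(), word) for word in transcript.split())

def audio_to_surveillable_text(audio: bytes) -> str:
    """Pipeline: audio file -> raw transcript -> normalized text."""
    return normalize(transcribe(audio))

print(audio_to_surveillable_text(b"sold two yard of cable"))
# sold two billion of GBP/USD
```

The messy-data problems Emily lists (background noise, shouting, whispering) all land in the `transcribe` stage; the jargon problem lands in `normalize`; and everything after that can reuse the eComms stack.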

Carmen Cracknell: Well, it sounds like written comms, eComms is definitely the most complex of all. And like you say, that’s kind of what we do here at Global Relay. So I mean, with multiple languages, coded language, industry jargon, colloquialisms, emojis, like you said, and also the fact that you might be handling cross-jurisdictional messaging, is there a very heavy reliance right now on AI to kind of fix this and develop better and better programs to speed up this process? Kind of where is the industry at right now, would you say?

Emily Wright: Definitely. I think if the industry thought that there was a magic wand to wave, it was going to be AI. And the question is what that will look like and how quickly it will come. So it’s definitely a really significant challenge for doing good surveillance. And technology really has to be the answer because if technology doesn’t solve it, your only alternative is something that’s just going to be impossible, which is prohibiting the use of mixed languages and emojis and jargon. And we just know that won’t work in the industry.

The execution of business just happens that way. So the current tools haven’t quite reached the point of resolving these, but we’re seeing really fast transformations and some of them are really quite powerful. So you’ll sort of see a period of time where there’s a lot of promise and a lot of people testing vendor tools and saying, oh, we’re not quite getting what we want. And then what we’ve seen is bursts of change and the industry moves very, very quickly.

So all you need is a little bit more of that burst of change. And maybe AI is the solution then. I think there’s a little bit of work that can happen between practitioners of surveillance, helping to imagine what the solutions look like as they partner with technologists. And I would say that it’s something I reference in the book and it’s something that generally I’ve observed is that’s not necessarily how things are set up to encourage that kind of participation and shared solutioning, but it could move things along quicker.

So if you think about where we are now, natural language processing, or NLP, models aren’t really new to surveillance. They’ve been deployed, usually, like I said, alongside lexicons. And the promise of NLP is that it can deliver the best coverage with fewer of those false positives and alerts. That’s the promise. And I would say that most institutions haven’t gotten 100 percent comfortable with that yet, because we’re still seeing lexicons deployed alongside. But the other piece to solving this problem isn’t just how good the model is. It’s how good the model is and how well it can translate, which is usually a separate set of AI.

So you’ve got translation built into some of these tools as well as the model that detects risk. And it’s the development of those two in parallel that really has to happen to bring this to a head and overcome the problem of how you deal with things like jargon, code words, and potentially emojis. I mean, emojis are going to be open to interpretation, so I think that’s even a step beyond, but certainly on mixed language. And I’ve chatted to a few institutions recently about the use of very specific, focused-risk large language models, LLMs, working alongside NLP and translation tools. So you’ve potentially got three different types of AI having to work alongside each other to solve this problem. But yes, I haven’t talked to anyone who thinks this is solved any other way than by technology.
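A toy composition of the three AI stages described here: translation, an NLP risk score, and a focused-risk LLM reviewer. All three stages are stubs invented for illustration; the phrasebook, score, and decision rule stand in for trained models:

```python
def translate(message: str) -> str:
    """Stand-in for a translation model (here, a two-word phrasebook)."""
    phrasebook = {"garanti": "guaranteed", "rendement": "return"}
    return " ".join(phrasebook.get(w.lower(), w) for w in message.split())

def nlp_score(message: str) -> float:
    """Stand-in for an NLP risk model scoring the translated text."""
    return 0.9 if "guaranteed" in message.lower() else 0.1

def llm_review(message: str, score: float) -> str:
    """Stand-in for a focused-risk LLM that triages high-scoring messages."""
    return "escalate" if score > 0.5 else "close"

def pipeline(message: str) -> str:
    text = translate(message)       # stage 1: translation
    score = nlp_score(text)         # stage 2: risk detection
    return llm_review(text, score)  # stage 3: LLM triage

print(pipeline("rendement garanti"))  # escalate
print(pipeline("bonjour"))            # close
```

The point of the sketch is the dependency: the risk model only sees what the translation stage gives it, which is why, as Emily notes, the two have to develop in parallel.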

Carmen Cracknell: Yeah, emojis seems like it would be a really hard one to do anything about. Does that include GIFs as well and other kind of image based messaging?

Emily Wright: So the difficulty is that you can send something like that, and where a picture tells a thousand words, the sender can claim that they only meant 500 of the thousand words that you’ve interpreted. And so there’s a lot of room for interpretation. One image could be meant a certain way, it could be received a different way, and then you can have surveillance interpreting it in a different way again. So it may just prompt a lot of conversations that never lead to real risk detection.

There are probably ways around that as well. But I think it’s probably much harder than where you’ve got people splicing together Korean with English, or where you’ve got people dropping in and out of traditional Chinese and putting in Cantonese characters just to mix it up. Those kinds of things are going to be cleaner and easier for the AI to overcome.

Carmen Cracknell: Yeah, I wanted to talk a bit about cases, some big cases, some pivotal moments. You mentioned the LIBOR scandal earlier and how that had fit in with your career. Was that a pivotal moment in terms of employee surveillance?

Emily Wright: It’s a long time ago now, but it is definitely a turning point. I think of the views in financial services, the shaking of confidence that happened in the City around this kind of investigation, and what were perceived as common practices then being found to be neither ethical nor within regulatory standards. I think culturally the whole City probably shifted a little bit and moved towards something we haven’t really stepped back from, which is a much stronger focus on regulation and oversight going forward.

The regulators certainly took that view, but it was also felt throughout the City. Much of the discussion I think we’re having today about culture in financial services probably came from those original discussions back then; it came from that period. eComms programs had probably already originated. Certainly we were pulling out eComms, and the archiving was taking place, even if they weren’t necessarily being systematically surveilled. But the current form of surveillance that we’ve got now would be unrecognizable to somebody sitting in 2012. Yeah, it’s definitely evolved a long way in that decade or so.

Carmen Cracknell: And what about the more recent huge fine levied against JP Morgan? What came after that?

Emily Wright: So, yeah, I actually wrote the book during the latter part of 2023 and put pens down in December last year. But in March this year, so just a few weeks ago, we saw the biggest ever fine issued for a surveillance program, and it was the JP Morgan fine. It was issued specifically for trade surveillance deficiencies and it had a combined value of $349 million. And just to put that in context, Citigroup’s 2022 fine was comparable, right? It was the first fine where there hadn’t been any abusive behavior; it was genuinely a surveillance program deficiency. So the trade surveillance program again was cited.

That fine was issued by the FCA and was twenty-two times smaller than the JP Morgan fine. So just to give you an idea: within 18 months between those two fines, both huge US institutions and both trade surveillance failings, where the regulators have now taken this as a level of violation I think is really interesting. There isn’t a huge amount of detail available; there never is on these things unless you’re in house.

But what is available means that we know that the remediation programs for both look very similar. They’re all around trade surveillance program enhancements. They both read in a very straightforward way to my thinking. The difference probably is that the FCA cited a two to three year period for Citigroup and the Fed specifically has cited nine full years of JP Morgan’s trade surveillance program being inadequate.

So that’s really the quantifiable difference. But the similarity is that they are really looking at cases where there isn’t employee misconduct. There hasn’t been any market manipulation that’s known of or that’s been identified. There isn’t even any reference to impact on markets. It’s specifically around the inadequacy of trade surveillance programs.

So I think you shouldn’t underestimate the importance of these programs on the back of what has now been 18 months of fines, and these are just the two big ones I’m referencing. Between Citigroup and JP Morgan there have been a number of other fines that have come through. And although they’ve always been proportional and the numbers aren’t as big, they’ve been really significant fines for the institutions they’ve been leveled at. So the remediation of these programs and setting them on the right track for the industry is clearly something the regulators on both sides of the pond are holding as very important to deliver.

Carmen Cracknell: I wanted to also ask you about the contrast between surveillance and privacy. What are the reasonable expectations and the contradictions between employers wanting to monitor employees while respecting their privacy, especially when it comes to things like social media, and especially with AI getting involved and being potentially very intrusive, with people being aware of that and resistant to it? What do firms need to think about going forward and take into consideration?

Emily Wright: So I think we probably have to unpick that; there’s a fair few things there. So if we think about privacy generally and surveillance programs, and I’ve set this out in the book because I do think it has to be addressed, there’s been a lot of discussion in a lot of other sectors around surveillance being inappropriately used and invading people’s privacy. What’s happening in financial institutions at the moment, I don’t think, currently impinges on anybody’s privacy.

There isn’t really a dichotomy as it’s currently set up. You’ve got very risk-focused surveillance taking place on bank-owned data, on specified channels. It’s completely transparent that it takes place. It’s in employees’ contracts. It’s part of their licensing regulations. It’s a regulatory requirement for entities. You go through all of those controls and checks and balances and there isn’t really an invasion of privacy. There’s a discussion about how much you do beyond what’s a regulatory requirement. There’s a discussion about whether or not it affects the culture of an organization, and how you position that kind of surveillance. But there’s definitely not a question that you’re invading privacy as it takes place.

I would also probably say that if you consider the ethical issues, one of the things that this kind of surveillance is in place to do is to detect the risk of disorderly markets, of somebody manipulating markets, of financially benefiting at somebody else’s cost, and to prevent that, or detect it and prevent it from happening again, and to mitigate a risk. And that is an ethical issue in itself, right? Somebody behaving in that way and committing market abuse, which is one of the examples, is an unethical act. And so the surveillance is in place to offset that. So I think the ethical question around surveillance is a little bit multi-pronged and not necessarily as straightforward as where we’ve seen it in other places.

So with the kind of outcomes that we would get in a surveillance program in a bank, you would usually know what risk you’re looking for, and you’d be looking at bank data. And if something came up that was a concerning event or something that needed explaining, almost the very next thing that would happen if you were to investigate is that you would either go and talk to the individual, if that was appropriate, or to somebody in the individual’s chain of command. So there isn’t anything happening in a covert way behind anybody’s back, which is probably a very long-winded way of saying I don’t think there’s an ethical challenge at the moment.

So the WhatsApp cases, I think, were provided as an example of where this has been challenged. And in the US, it’s been written up as a privacy case, because obviously that’s where the focus of the fines has been. And you’ve got this discussion about people handing over personal devices and access to their WhatsApp channels, and potentially a challenge around the Fourth Amendment and whether or not you have to surrender that kind of personal information.

And it doesn’t really look like it’s going to play out as though people have to surrender. It does look like, as the lawyers work their way through this, that the challenge probably isn’t going to be met by people handing over personal devices and WhatsApp. What’s really going to happen is that institutions who have an obligation to make sure all business communication happens on recorded channels are going to have to ensure that happens in a way that’s much more rigorous than what they’d previously done. And, without wanting to sidestep the contentious answer I’m sure you’re after, the problem doesn’t have to be a problem of privacy versus surveillance.

The surveillance has to be put in place around business communications and business communications have to take place on those approved channels where surveillance is. So then the discussion really is about culture and conduct. It’s really, if you’ve got a code of conduct policy that says business communications take place this way on these channels and you’ve got people who are violating that, you’ve got a culture problem, you’ve got a code of conduct problem, you don’t really have a surveillance problem. And you don’t have to tackle that by challenging somebody to surrender private information. You tackle that the way you tackle a code of conduct violation and there are frameworks in place to tackle those kinds of things.

So I think we can avoid this big challenge around individual privacy. Probably where we will see that blurring of the lines between personal life and work life, and we’re seeing it a little bit, is more the direction that the FCA is taking. So if we look at their expanded definition now of non-financial risks and how they’ve broadened it, saying diversity and inclusion is a non-financial risk, and you look at the fit and proper requirements and say, well, sexual harassment outside of working hours, outside of the workplace, is something that makes you no longer fit and proper to be licensed.

Well, how are we going to find those things? Then you’re really going to have somebody saying, well, that’s my private life. That’s personal. I posted that photo on social media. And I think that’s where you might see it. And it’s going to be much less around surveillance. I think it’s going to be much more around this notion of being fit and proper in that context of non-financial risk that’s created by people’s conduct.

Carmen Cracknell: I wanted to ask you as well about how compliance surveillance can remain ethical, particularly in the age of AI tools. I guess you’ve kind of just answered that, but if you have anything else to add.

Emily Wright: The age of AI tools: I think we could go off into a very made-up kind of reality there, because the direction they’re going is unclear within financial services and specifically in surveillance. But if we come back to those principles of what is ethical, is AI going to change that? And I think with these tools used in these contexts, you stay ethical as long as you stick to the principles of being transparent and risk-focused.

People are […] what channels there are, and it’s in a regulatory environment where there’s a lot of scrutiny and checks. Ethics may not be the question for surveillance, but you could flip it around a little bit: are these tools going to be used ethically in financial services? And that’s much less clear, because if they come into financial services in an unethical way, there’ll be a way to also use them in surveillance. So I think part of it is just that unknown. But AI could create far more that we have to detect, and then we’d need surveillance tools to detect it. The only reason we do surveillance is that there’s risk to detect.

I think if we really go a long way down the line on AI and the direction of travel for financial services, there’s kind of a different question. And that is that we will likely be taking humans out of a lot of market practices. A lot of that risk in markets may no longer be created by humans, so we would potentially be surveilling just machines and algorithms. Is there such a thing as privacy, or even such a thing as ethics, when you’re surveilling an algorithm? Probably the simple answer is no, though we could have a debate. But I actually wonder if AI takes some of that ethics stuff off the table anyway, because you’re replacing humans with machines.

Carmen Cracknell: Could you just tell any listeners where they can get hold of your book and when it’s available?

Emily Wright: Yes, of course. Yeah. So my book is Behind the Screens: Understanding Employee Surveillance in Financial Services. It’s available on Amazon. So pick your jurisdiction. It’s available on all of them. And it’s been available since the 29th of February. So that’s live.

Carmen Cracknell: Thank you so much Emily for joining us today on The GRIP Podcast.

Emily Wright: Brilliant. Thank you.
